Turn Mockups into Code with Just JavaScript

AI is the future and it’s going to take all our jobs. You’ve probably heard that quite a bit as Machine Learning systems replace more and more traditional, manual, and repetitive jobs. You’ll also hear cries about the inevitable loss of entry-level programming jobs to our AI friends. But at the same time, most Software Engineers will tell you that’ll never happen, because our jobs are too abstract and complex for a computer to understand.

Personally, I welcome a happy medium where many repetitive tasks I perform on a daily basis are automated away, so I can focus more on business logic and user functionality. Instead of viewing Machine Learning as a harbinger of mass unemployment, it should be seen as a tool to help us evolve the very definition of a Software Engineer, so we can deliver more quality and value in our products. A large part of my day-to-day work is translating UX mocks into code, which invariably leads to copying lots of code from existing pages and fiddling with layout, spacing, and colors until it looks right. And then I can eventually focus on specific business needs for that particular feature. But if I had a program which generated a component layout based on a given mock, I would save hours, if not days, of development time every week. That would be amazing!

A quick Google search will show many companies already working to make this a reality. So I created a small proof-of-concept app using Machine Learning to translate a simple image to React code, all using JavaScript. And it actually works! If someone like me, who has next to no data science experience, can create something like this, the future of this space looks bright indeed.

You can find all the code here. And here’s a preview of the final product:

[Image: preview of the final product]

Premise

I wanted to keep things as simple as possible to start, so I trained a model to understand the difference between 4 form components: text-inputs, toggles, checkboxes, and sliders. Once it could distinguish between the 4, I used it to determine what components were present in a newly drawn mock and in what order, then stitched together pre-written JSX in that order to render a basic form (no functions defined yet, just a basic skeleton). It’s far from perfect and I only trained the model for about an hour, but the results were still impressive.

I’ll assume a basic understanding of JavaScript, Node, React, and Neural Networks going forward. Of course, all concepts below can be applied to any programming language, but I chose Brain.js and JavaScript for their simplicity. If terminology below doesn’t make sense, I would highly recommend some online tutorials or classes (I took some great Deep Learning classes on Coursera to understand basics).

Preprocessing

This is arguably the most important (and time-consuming) task for any Machine Learning project: having good, plentiful data is crucial.

For this POC, I drew variations of each component and saved them as 200px by 100px JPEG images. Then I threw in some screenshots of components from various sites and voilà, I had 120 images for training. To reduce training noise, I converted all images to gray-scale, and to create even more samples, I flipped each picture horizontally, so I had 240 component pictures in total across 4 different types.

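The grayscale conversion and horizontal flip can be sketched in plain Node, assuming each image has already been decoded into a flat array of pixel values (the actual scripts use an image library; these helper names are illustrative):

```javascript
// Convert interleaved RGB pixel values to grayscale with the luminosity
// method, whose weights approximate human brightness perception.
function toGrayscale(rgb) {
  const gray = [];
  for (let i = 0; i < rgb.length; i += 3) {
    gray.push(Math.round(0.299 * rgb[i] + 0.587 * rgb[i + 1] + 0.114 * rgb[i + 2]));
  }
  return gray;
}

// Mirror a grayscale image horizontally to double the sample count.
function flipHorizontal(gray, width) {
  const flipped = [];
  for (let row = 0; row < gray.length / width; row++) {
    const start = row * width;
    flipped.push(...gray.slice(start, start + width).reverse());
  }
  return flipped;
}
```

The flipped copies are still valid examples of each component type, which is why this cheap trick works as data augmentation here.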
All original images can be found in the repo under /pics/component-pics and processed images can be found in /pics/training-data.

Training

Passing each image in as inputs to a Convolutional Neural Network model seems like the most obvious route, but at the time of writing, Brain.js doesn’t support CNN models. So for each picture, I used the hog-features library to compute a Histogram of Oriented Gradients (HOG), which is an array of numbers between 0 and 1 representing edge orientation in an image. This lets us convert images into something a standard Neural Network can use, and in some cases can work just as well as a CNN model for detecting features among images.

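To make the descriptor concrete, here is a minimal, self-contained sketch of the core idea behind HOG for a single cell of a grayscale image (the hog-features library adds cell grids and block normalization on top; this function is mine, not the library's API):

```javascript
// For one cell of a grayscale image, bin gradient orientations into a
// 9-bucket histogram weighted by gradient magnitude, then scale to [0, 1].
function hogCell(gray, width) {
  const bins = new Array(9).fill(0);
  const height = gray.length / width;
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      // Central-difference gradients in x and y.
      const gx = gray[y * width + x + 1] - gray[y * width + x - 1];
      const gy = gray[(y + 1) * width + x] - gray[(y - 1) * width + x];
      const magnitude = Math.hypot(gx, gy);
      // Unsigned orientation in [0, 180) degrees, split into 9 bins of 20°.
      const angle = ((Math.atan2(gy, gx) * 180) / Math.PI + 180) % 180;
      bins[Math.min(8, Math.floor(angle / 20))] += magnitude;
    }
  }
  const max = Math.max(...bins, 1e-6); // avoid division by zero on flat cells
  return bins.map((b) => b / max);
}
```

For an image containing a sharp vertical edge, all the weight lands in the 0° bin; a toggle's round knob spreads weight across many bins, which is the kind of difference the network can learn from.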
Parameters: After testing with different parameters, I ended up with sigmoid activation, hidden layers of [10, 10], and learning rate of 0.2.

That’s really all you need to create a model in Brain.js. There’s some extra code for getting the HOG inputs, tracking iterations, and saving the model afterwards, but that’s the gist of using Brain.js: super simple.

Training on 240 images and about 5000 iterations took roughly an hour on my desktop to reach an error rate of 0.005 (according to Brain.js). Here are some examples of inputs and the results I was seeing:

[Image: sample inputs and the model's prediction results]

All Node scripts for training and predicting can be found in /scripts.

Conversion

Once we’re confident with results on training data, we can split a given mock into different components to see how well the model can identify them. I used the split-images library to split a mock into 200px by 100px sections to run through the model. For each type of component that can be identified, I defined pre-written JSX, which is used to produce the final output based on the order of components found in the mock. From there, it’s pretty simple to copy into a React page to render.

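The stitching step itself is little more than a lookup and a join. A simplified sketch (the component names match the four types the model knows; the JSX strings are placeholders for the real pre-written snippets in the repo):

```javascript
// Placeholder JSX for each component type the model can identify.
const JSX_TEMPLATES = {
  "text-input": '  <input type="text" />',
  toggle: "  <Toggle />",
  checkbox: '  <input type="checkbox" />',
  slider: '  <input type="range" />',
};

// Pick the component type the network scored highest for one slice.
function bestMatch(scores) {
  return Object.keys(scores).reduce((a, b) => (scores[a] >= scores[b] ? a : b));
}

// predictions: one score object per 200x100 slice, in top-to-bottom order.
function stitchForm(predictions) {
  const body = predictions.map((p) => JSX_TEMPLATES[bestMatch(p)]).join("\n");
  return `<form>\n${body}\n</form>`;
}
```

Because the slices are processed in document order, the emitted JSX preserves the vertical layout of the mock without any positioning logic.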
Here’s another example using some new (poorly) drawn images:

[Image: predictions on newly drawn component images]

The model’s confidence isn’t as high on new drawings, and I found the positioning of the component within each section changed the results quite a bit. This can be improved by creating more training data with dynamic sizes (instead of just 200x100) and different positions. Also, increasing the number and size of the hidden layers would let the model pick up more complex features.

Next Steps

This is a simple implementation, so there’s quite a lot that could be improved in future versions:

  • Training on more data, to recognize more component types with greater accuracy

  • More hidden layers to increase what the model can understand (and input images of varying size)

  • Applying the model to a bigger mock, detecting precise locations, and applying correct positioning in the final component

  • Using templates for the pre-written JSX to create a library of potential outputs (and possibly train the model to recognize different styles and themes)

  • Add more functionality to the final output, instead of just skeleton components

  • Build a UI to accept and predict any image; currently it’s called from a Node script, but Brain.js can be used from the front-end as well


If anyone reading would like to work on any of those points or use this code as a base for anything you’re working on, please feel free!

Conclusion

It took me about a day to create training data and to write the scripts needed to produce a model with passable results, meaning a team of well-trained data scientists could work wonders in this field. I wouldn’t be surprised if such technology becomes commonplace in just a few years, as it becomes easier to implement basic frameworks and to train models with more computing power.

What are your thoughts? I’m extremely excited to see what’s to come.

Translated from: https://medium.com/javascript-in-plain-english/turn-mockups-into-code-with-just-javascript-6e9cf1d9220
