Qt AI Assistant Experimental Released
January 23, 2025 by Peter Schneider
We have now released the Qt AI Assistant to help you in cross-platform software development. The Qt AI Assistant is an AI-powered development assistant that runs in Qt Creator and works with multiple Large Language Models (LLMs). This blog post gives a little "behind-the-scenes" view of its making.
Why The Qt AI Assistant Is What It Is
When we started to scope the Qt AI Assistant, we wanted to focus on what we believe matters most to developers: being creative and writing great code.
In many conversations with developers, three areas repeatedly came up where Gen AI should help them: getting expert advice, creating unit test cases, and writing code documentation. Furthermore, we often hear that coding assistants should give good examples of the latest QML and Qt Quick functionality. So, we made it our mission to focus our coding assistant on automating "boring" repetitive developer tasks and providing the latest Qt technology insights, while still delivering a coding assistant with generic C++ and Python programming capabilities. By automating complementary tasks such as unit test cases and code documentation, we give developers back a bit more time for the thing that made them choose the profession in the first place: programming.
Another group of stakeholders that influenced the software architecture of the Qt AI Assistant was product managers of products built with Qt. Being able to protect the product's competitive advantage is crucial to them. Product managers are concerned about leaking code in the context of prompts or the outputs of LLMs. Maintaining control by running a private cloud deployment of a Large Language Model was considered essential to protect the company from accidental or intentional intellectual property leaks. By providing the option to connect to privately hosted Large Language Models, we give our customers control over their data. As a matter of fact, we are one of the few coding assistants that allow self-hosted cloud and local LLMs.
Implementing the MVP of our AI Assistant
We started the Qt AI Assistant by developing two things. First, we built the end-to-end pipeline to an LLM that creates good QML code without fine-tuning. Second, we implemented the basic code completion functionality that the GitHub Copilot plug-in already offers in the Qt Creator IDE.
The end-to-end pipeline consists of the Qt Creator plug-in, which we built on top of Qt Creator 15; an open-source Language Server, which is already part of the Qt software stack; and Anthropic's API to Claude 3.5 Sonnet. We decided to call the "Language Server" the "AI Assistant Server" because it consists of much more than a standard language server, including the LLM APIs, prompt routing, prompt engineering, context filtering, and, later on, the multi-agent capabilities.
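To make the pipeline concrete, here is a minimal, hand-written sketch of the kind of request that goes over Anthropic's Messages API, using Qt's own network classes. It is an illustration under assumptions (the model name, prompt, and error handling are simplified), not the actual AI Assistant Server code:

```cpp
// Minimal sketch: posting a prompt to Anthropic's Messages API with Qt.
// Illustrative only; not the AI Assistant Server implementation.
#include <QCoreApplication>
#include <QDebug>
#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QNetworkAccessManager manager;

    // The API key is read from the environment in this sketch.
    QNetworkRequest request(QUrl("https://api.anthropic.com/v1/messages"));
    request.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");
    request.setRawHeader("x-api-key", qgetenv("ANTHROPIC_API_KEY"));
    request.setRawHeader("anthropic-version", "2023-06-01");

    const QJsonObject body{
        {"model", "claude-3-5-sonnet-20241022"},
        {"max_tokens", 1024},
        {"messages", QJsonArray{QJsonObject{
            {"role", "user"},
            {"content", "Suggest a QML ListView delegate for a contact list."}}}}
    };

    QNetworkReply *reply = manager.post(request, QJsonDocument(body).toJson());
    QObject::connect(reply, &QNetworkReply::finished, [reply, &app]() {
        // The generated text is in content[0].text of the JSON response.
        const QJsonDocument doc = QJsonDocument::fromJson(reply->readAll());
        qInfo().noquote() << doc["content"][0]["text"].toString();
        reply->deleteLater();
        app.quit();
    });
    return app.exec();
}
```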
For the code completion work, we started with implementing suggestions triggered by a keyboard shortcut. We chose this starting point because there was increasing noise in the online forums that some developers find automatic code completion disruptive to their usual workflow. We had already implemented the UI elements for the code completion tab bar for the GitHub Copilot plug-in, and therefore, adding that one for the Qt AI Assistant was straightforward.
Nevertheless, no coding assistant is complete without automatic code suggestions. So, automatic code completion was the next feature to be developed. Because automatic code completion can quickly burn through many tokens and disrupt the workflow of some developers, adding the assistant settings was next on the agenda. Initially, we kept it simple, allowing users to turn the assistant and automatic code completion on and off across all projects.
Next was the development of the Inline Prompt Window. This required the UI design team to create a functional model in Figma for some usability tests. Once the design was settled, we started implementing a basic human language prompt interface for expert advice and prepared a drop-down menu for smart commands.
For the upcoming smart commands, we started to implement the logic in the AI Assistant Server to route prompts to different LLMs based on the type of smart command and the programming language. Ultimately, we want to enable developers to choose their favorite LLM per purpose, so we allow different LLMs for QML code completion, code completion for other programming languages, prompts regarding QML, and prompts regarding other languages.
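To illustrate the idea, here is a hedged sketch of what such a routing table might look like. The task names, language keys, and model identifiers below are hypothetical, not the assistant's shipped configuration:

```cpp
// Illustrative prompt routing: pick an LLM by smart-command type and
// programming language. All names and model strings are hypothetical.
#include <QHash>
#include <QPair>
#include <QString>

enum class Task { CodeCompletion, ExpertAdvice };

QString modelFor(Task task, const QString &language)
{
    using Key = QPair<int, QString>;
    static const QHash<Key, QString> routes{
        {{int(Task::CodeCompletion), "qml"}, "claude-3-5-sonnet"},
        {{int(Task::CodeCompletion), "cpp"}, "gpt-4o"},
        {{int(Task::ExpertAdvice),   "qml"}, "claude-3-5-sonnet"},
        {{int(Task::ExpertAdvice),   "cpp"}, "llama-3.1-70b"},
    };
    // Fall back to one default model when no specific route is configured.
    return routes.value({int(task), language}, "claude-3-5-sonnet");
}
```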
With the R&D team growing at this stage, we started implementing the connection to Llama 3.1 70B for prompts, i.e., expert advice. The Llama "herd" of models is currently the best-performing "royalty-free" model family for QML programming (I will write more about QML coding quality benchmarking in another blog post). Using a Llama 3.1 model is more complex than entering an authentication token into Anthropic's API. Still, our AI Engineer got it running quickly in a Microsoft Azure cloud.
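As an illustration of the self-hosted case: many private Llama deployments expose an OpenAI-compatible chat completions endpoint. Assuming such an endpoint, a request might look like the following sketch; the host, model name, and key variable are placeholders, and the real server wiring may differ:

```cpp
// Sketch: the same request pattern against a self-hosted Llama 3.1 model
// behind an OpenAI-compatible /v1/chat/completions endpoint (a common
// convention; host, model name, and key env var are placeholders).
#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>

QNetworkReply *askLlama(QNetworkAccessManager &manager, const QString &prompt)
{
    QNetworkRequest request(
        QUrl("https://llama.internal.example.com/v1/chat/completions"));
    request.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");
    request.setRawHeader("Authorization", "Bearer " + qgetenv("LLAMA_API_KEY"));

    const QJsonObject body{
        {"model", "llama-3.1-70b-instruct"},
        {"messages", QJsonArray{QJsonObject{
            {"role", "user"}, {"content", prompt}}}}
    };
    // The reply's JSON carries the answer in choices[0].message.content.
    return manager.post(request, QJsonDocument(body).toJson());
}
```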
Optimizing the Smart Commands
Another AI Engineer optimized the prompts for the different smart commands, including creating unit test cases in Qt Test syntax. This required setting up a dataset of Qt Test cases for benchmarking the LLM responses and developing the pipeline from the Qt AI Assistant Server to the LLMs for test cases. We focused the prompt engineering on Claude 3.5 Sonnet because it performed best without fine-tuning. We still need to optimize OpenAI's GPT-4o and Meta's Llama 3.3 70B for this use case at a later stage.
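For a flavor of the target output, here is a minimal, hand-written Qt Test case in the data-driven syntax the unit test smart command aims to generate (illustrative only, not actual assistant output):

```cpp
// tst_stringutils.cpp - a minimal data-driven Qt Test case, hand-written
// to illustrate the Qt Test syntax targeted by the unit test smart command.
#include <QtTest>

class TestStringUtils : public QObject
{
    Q_OBJECT
private slots:
    void toUpper_data()
    {
        // Each row is one test case: input string and expected result.
        QTest::addColumn<QString>("input");
        QTest::addColumn<QString>("expected");
        QTest::newRow("lowercase") << "hello"  << "HELLO";
        QTest::newRow("mixed")     << "QtTest" << "QTTEST";
    }
    void toUpper()
    {
        QFETCH(QString, input);
        QFETCH(QString, expected);
        QCOMPARE(input.toUpper(), expected);
    }
};

QTEST_APPLESS_MAIN(TestStringUtils)
#include "tst_stringutils.moc"
```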
How to Try Out the Qt AI Assistant
Trying out the Qt AI Assistant is straightforward:
- Ensure that your Qt Creator can use the Qt AI Assistant: check the version; it needs to be 15.0.1 or newer (the Qt AI Assistant won't work with older versions)
- In Qt Creator's Extensions settings, ensure that "Use external repository" is checked
- In Qt Creator, go to the Extensions view and select the AI Assistant
- Install and activate the Qt AI Assistant extension
- Accept Internet access and the installation of the AI Assistant Server
- Select the installation scope (individual user / all users) and enable loading the extension at every start-up
- Connect to at least one Large Language Model by adding the token information in the Advanced settings tab
Note: You need to have a valid premium Qt Development license (Qt for AD Enterprise or better). If you don't have one, sign up for a Qt Development evaluation license. That will work, too.
What’s Next for the Qt AI Assistant
Obviously, there is still plenty to do, and the Qt AI Assistant is evolving rapidly.
There is a good amount of work left to optimize the capabilities of the various LLMs that are currently marked as Experimental. We need to optimize the prompts for LLMs such as GPT-4o. We are furthest along with optimizing the prompts for Claude 3.5 Sonnet, and I always suggest trying out Sonnet for the best AI Assistant experience.
One of the big questions is the requirement for an in-built chat. Many of you will say that it would be a great feature to have, and I agree. But it is a major development effort, both in terms of UI development and back-end development (managing a memory of the conversation). It would come at a trade-off against other features, such as a DiffView or collecting context from more than the current file. There are great chat solutions for general-purpose content generation, and we do not have much to add except the convenience of running within the IDE.
We Want Your Feedback
But most of all, we want to hear your feedback on what you are missing or what we should do differently. Launching the MVP as an experimental product allows us to consider your input and evolve rapidly while setting up the future backlog. Let us know what you want in the Qt AI Assistant in the comments below or through your Qt contact.
More About The Product?
Check out the product pages.