feat: correct the capitalization (#33)

* feat: correct the capitalization

* workflow(ci): fix lint error

---------

Co-authored-by: zhouxiao.shaw <zhouxiao.shaw@bytedance.com>
yuyutaotao 2024-08-05 11:03:32 +08:00 committed by GitHub
parent 7ee0bdd82a
commit 59081e33e7
27 changed files with 82 additions and 82 deletions

.gitignore

@ -96,7 +96,7 @@ playwright-report/
blob-report/
playwright/.cache/
# MidScene.js dump files
# Midscene.js dump files
__ai_responses__/


@ -1,6 +1,6 @@
# MidScene Contribution Guide
# Midscene Contribution Guide
Thanks for that you are interested in contributing to MidScene. Before starting your contribution, please take a moment to read the following guidelines.
Thanks for that you are interested in contributing to Midscene. Before starting your contribution, please take a moment to read the following guidelines.
---
@ -130,7 +130,7 @@ npx nx test @midscene/web
### Run E2E Tests
MidScene uses [playwright](https://github.com/microsoft/playwright) to run end-to-end tests.
Midscene uses [playwright](https://github.com/microsoft/playwright) to run end-to-end tests.
You can run the `e2e` command to run E2E tests:
@ -193,7 +193,7 @@ feat(core): Add `myOption` config
## Versioning
All MidScene packages will use a fixed unified version.
All Midscene packages will use a fixed unified version.
The release notes are automatically generated by [GitHub releases](https://github.com/web-infra-dev/midscene/releases).


@ -1,6 +1,6 @@
MIT License
Copyright (c) 2021-present MidScene.js
Copyright (c) 2021-present Midscene.js
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@ -1,9 +1,9 @@
<p align="center">
<img alt="MidScene.js" width="260" src="https://github.com/user-attachments/assets/bff5e76f-ea5c-42b7-bd12-0143a04671cf">
<img alt="Midscene.js" width="260" src="https://github.com/user-attachments/assets/bff5e76f-ea5c-42b7-bd12-0143a04671cf">
</p>
<h1 align="center">MidScene.js</h1>
<h1 align="center">Midscene.js</h1>
<div align="center">
English | [简体中文](./README.zh.md)
@ -20,11 +20,11 @@ English | [简体中文](./README.zh.md)
<img src="https://img.shields.io/badge/License-MIT-blue.svg?style=flat-square&color=00a8f0" alt="License" />
</p>
MidScene.js is an AI-powered automation SDK can control the page, perform assertions, and extract data in JSON format using natural language.
Midscene.js is an AI-powered automation SDK can control the page, perform assertions, and extract data in JSON format using natural language.
## Features ✨
- **Natural Language Interaction 👆**: Describe the steps and let MidScene plan and control the user interface for you
- **Natural Language Interaction 👆**: Describe the steps and let Midscene plan and control the user interface for you
- **Understand UI, Answer in JSON 🔍**: Provide prompts regarding the desired data format, and then receive the expected response in JSON format.
- **Intuitive Assertion 🤔**: Make assertions in natural language. Its all based on AI understanding.
- **Out-of-box LLM 🪓**: It is fine to use public multimodal LLMs like GPT-4o. There is no need for any custom training.
@ -40,4 +40,4 @@ MidScene.js is an AI-powered automation SDK can control the page, perform assert
## License
MidScene.js is [MIT licensed](https://github.com/web-infra-dev/midscene/blob/main/LICENSE).
Midscene.js is [MIT licensed](https://github.com/web-infra-dev/midscene/blob/main/LICENSE).


@ -1,8 +1,8 @@
<p align="center">
<img alt="MidScene.js" width="260" src="https://github.com/user-attachments/assets/bff5e76f-ea5c-42b7-bd12-0143a04671cf">
<img alt="Midscene.js" width="260" src="https://github.com/user-attachments/assets/bff5e76f-ea5c-42b7-bd12-0143a04671cf">
</p>
<h1 align="center">MidScene.js</h1>
<h1 align="center">Midscene.js</h1>
<div align="center">
[English](./README.md) | 简体中文
@ -20,11 +20,11 @@
</p>
MidScene.js 是一个由 AI 驱动的自动化 SDK能够使用自然语言对网页进行操作、验证并提取 JSON 格式的数据。
Midscene.js 是一个由 AI 驱动的自动化 SDK能够使用自然语言对网页进行操作、验证并提取 JSON 格式的数据。
## 特性 ✨
- **自然语言互动 👆**只需描述你的步骤MidScene 会为你规划和操作用户界面
- **自然语言互动 👆**只需描述你的步骤Midscene 会为你规划和操作用户界面
- **理解UI、JSON格式回答 🔍**:你可以提出关于数据格式的要求,然后得到 JSON 格式的预期回应。
- **直观断言 🤔**用自然语言表达你的断言AI 会理解并处理。
- **开箱即用的LLM 🪓**:使用公开的多模态大语言模型( 如GPT-4o ),无需任何定制训练。
@ -40,4 +40,4 @@ MidScene.js 是一个由 AI 驱动的自动化 SDK能够使用自然语言对
## 授权许可
MidScene.js 遵循 [MIT 许可协议](https://github.com/web-infra-dev/midscene/blob/main/LICENSE)。
Midscene.js 遵循 [MIT 许可协议](https://github.com/web-infra-dev/midscene/blob/main/LICENSE)。


@ -2,9 +2,9 @@
UI automation can be frustrating, often involving a maze of *#ids*, *data-test-xxx* attributes, and *.selectors* that are difficult to maintain, especially when the page undergoes a refactor.
Introducing MidScene.js, an innovative SDK designed to bring joy back to programming by simplifying automation tasks.
Introducing Midscene.js, an innovative SDK designed to bring joy back to programming by simplifying automation tasks.
MidScene.js leverages a multimodal Large Language Model (LLM) to intuitively “understand” your user interface and carry out the necessary actions. You can simply describe the interaction steps or expected data formats, and the AI will handle the execution for you.
Midscene.js leverages a multimodal Large Language Model (LLM) to intuitively “understand” your user interface and carry out the necessary actions. You can simply describe the interaction steps or expected data formats, and the AI will handle the execution for you.
## Features
@ -38,6 +38,6 @@ You may open the [Online Visualization Tool](/visualization/index.html) to see t
## Flow Chart
Here is a flowchart illustrating the core process of MidScene.
Here is a flowchart illustrating the core process of Midscene.
![](/flow.png)


@ -126,7 +126,7 @@ Promise.resolve(
await page.goto("https://www.ebay.com");
await sleep(5000);
// 👀 init MidScene agent
// 👀 init Midscene agent
const mid = new PuppeteerAgent(page);
// 👀 type keywords, perform a search
@ -178,7 +178,7 @@ npx ts-node demo.ts
### Step 4. view test report after running
After running, MidScene will generate a log dump, which is placed in `./midscene_run/report/latest.web-dump.json` by default. Then put this file into [Visualization Tool](/visualization/), and you will have a clearer understanding of the process.
After running, Midscene will generate a log dump, which is placed in `./midscene_run/report/latest.web-dump.json` by default. Then put this file into [Visualization Tool](/visualization/), and you will have a clearer understanding of the process.
## View demo report


@ -1,17 +1,17 @@
# FAQ
### Can MidScene smartly plan the actions according to my one-line goal? Like executing "Tweet 'hello world'"
### Can Midscene smartly plan the actions according to my one-line goal? Like executing "Tweet 'hello world'"
MidScene is an automation assistance SDK with a key feature of action stability — ensuring the same actions are performed in each run. To maintain this stability, we encourage you to provide detailed instructions to help the AI understand each step of your task.
Midscene is an automation assistance SDK with a key feature of action stability — ensuring the same actions are performed in each run. To maintain this stability, we encourage you to provide detailed instructions to help the AI understand each step of your task.
If you require a 'goal-to-task' AI planning tool, you can develop one based on MidScene.
If you require a 'goal-to-task' AI planning tool, you can develop one based on Midscene.
Related Docs:
* [Tips for Prompting](./prompting-tips.html)
### Limitations
There are some limitations with MidScene. We are still working on them.
There are some limitations with Midscene. We are still working on them.
1. The interaction types are limited to only tap, type, keyboard press, and scroll.
2. It's not 100% stable. Even GPT-4o can't return the right answer all the time. Following the [Prompting Tips](./prompting-tips) will help improve stability.
@ -19,11 +19,11 @@ There are some limitations with MidScene. We are still working on them.
### Which LLM should I choose ?
MidScene needs a multimodal Large Language Model (LLM) to understand the UI. Currently, we find that OpenAI's GPT-4o performs much better than others.
Midscene needs a multimodal Large Language Model (LLM) to understand the UI. Currently, we find that OpenAI's GPT-4o performs much better than others.
### About the token cost
Image resolution and element numbers (i.e., a UI context size created by MidScene) will affect the token bill.
Image resolution and element numbers (i.e., a UI context size created by Midscene) will affect the token bill.
Here are some typical data with GPT-4o.
@ -37,8 +37,8 @@ Here are some typical data with GPT-4o.
### The automation process is running more slowly than it did before
Since MidScene.js invokes AI for each planning and querying operation, the running time may increase by a factor of 3 to 10 compared to traditional Playwright scripts, for instance from 5 seconds to 20 seconds. This is currently inevitable but may improve with advancements in LLMs.
Since Midscene.js invokes AI for each planning and querying operation, the running time may increase by a factor of 3 to 10 compared to traditional Playwright scripts, for instance from 5 seconds to 20 seconds. This is currently inevitable but may improve with advancements in LLMs.
Despite the increased time and cost, MidScene stands out in practical applications due to its unique development experience and easy-to-maintain codebase. We are confident that incorporating automation scripts powered by MidScene will significantly enhance your projects efficiency, cover many more situations, and boost overall productivity.
Despite the increased time and cost, Midscene stands out in practical applications due to its unique development experience and easy-to-maintain codebase. We are confident that incorporating automation scripts powered by Midscene will significantly enhance your projects efficiency, cover many more situations, and boost overall productivity.
In short, it is worth the time and cost.


@ -1,6 +1,6 @@
# Tips for Prompting
The natural language parameter passed to MidScene will be part of the prompt sent to the LLM. There are certain techniques in prompt engineering that can help improve the understanding of user interfaces.
The natural language parameter passed to Midscene will be part of the prompt sent to the LLM. There are certain techniques in prompt engineering that can help improve the understanding of user interfaces.
### The purpose of optimization is to get a stable response from AI
@ -28,7 +28,7 @@ Bad ❌: "[number, number], the [x, y] coords of the main button"
### Use visualization tool to debug
Use the visualization tool to debug and understand each step of MidScene. Just upload the log, and view the AI's parse results. You can find [the tool](/visualization/) on the navigation bar on this site.
Use the visualization tool to debug and understand each step of Midscene. Just upload the log, and view the AI's parse results. You can find [the tool](/visualization/) on the navigation bar on this site.
### Remember to cross-check the result by assertion


@ -2,7 +2,7 @@
## config AI vendor
MidScene uses the OpenAI SDK as the default AI service. You can customize the configuration using environment variables.
Midscene uses the OpenAI SDK as the default AI service. You can customize the configuration using environment variables.
There are the main configs, in which `OPENAI_API_KEY` is required.
@ -50,7 +50,7 @@ You can view the integration sample in [quick-start](../getting-started/quick-st
### `.aiAction(steps: string)` or `.ai(steps: string)` - Control the page
You can use `.aiAction` to perform a series of actions. It accepts a `steps: string` as a parameter, which describes the actions. In the prompt, you should clearly describe the steps. MidScene will take care of the rest.
You can use `.aiAction` to perform a series of actions. It accepts a `steps: string` as a parameter, which describes the actions. In the prompt, you should clearly describe the steps. Midscene will take care of the rest.
`.ai` is the shortcut for `.aiAction`.
@ -66,18 +66,18 @@ await mid.ai('Click the "completed" status button below the task list');
Steps should always be clearly and thoroughly described. A very brief prompt like 'Tweet "Hello World"' will result in unstable performance and a high likelihood of failure.
Under the hood, MidScene will plan the detailed steps by sending your page context and a screenshot to the AI. After that, MidScene will execute the steps one by one. If MidScene deems it impossible to execute, an error will be thrown.
Under the hood, Midscene will plan the detailed steps by sending your page context and a screenshot to the AI. After that, Midscene will execute the steps one by one. If Midscene deems it impossible to execute, an error will be thrown.
The main capabilities of MidScene are as follows, and your task will be split into these types. You can see them in the visualization tools:
The main capabilities of Midscene are as follows, and your task will be split into these types. You can see them in the visualization tools:
1. **Locator**: Identify the target element using a natural language description
2. **Action**: Tap, scroll, keyboard input, hover
3. **Others**: Sleep
Currently, MidScene can't plan steps that include conditions and loops.
Currently, Midscene can't plan steps that include conditions and loops.
Related Docs:
* [FAQ: Can MidScene smartly plan the actions according to my one-line goal? Like executing "Tweet 'hello world'](../more/faq.html)
* [FAQ: Can Midscene smartly plan the actions according to my one-line goal? Like executing "Tweet 'hello world'](../more/faq.html)
* [Tips for Prompting](../more/prompting-tips.html)
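The doc hunk above describes how a prompt is split into Locator, Action, and Others task types before execution. As a rough illustration of that split — the object shapes below are assumptions for illustration only, not the actual @midscene/core types:

```javascript
// Hypothetical sketch of a planned task list. Field names are assumed
// for illustration; only the three task *types* come from the docs above.
const plannedTasks = [
  // 1. Locator: find the target element from a natural-language description
  { type: 'Locator', prompt: 'the "completed" status button below the task list' },
  // 2. Action: tap, scroll, keyboard input, hover
  { type: 'Action', subType: 'Tap' },
  // 3. Others: sleep
  { type: 'Others', subType: 'Sleep', timeMs: 3000 },
];

// Tasks are executed one by one; visualization tools show the same split.
function summarize(tasks) {
  return tasks.map((t) => t.type).join(' -> ');
}

console.log(summarize(plannedTasks)); // "Locator -> Action -> Others"
```

A real plan comes back from the LLM, which is why conditions and loops cannot (yet) appear in it.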
### `.aiQuery(dataDemand: any)` - extract any data from page
@ -107,9 +107,9 @@ const dataC = await mid.aiQuery('{name: string, age: string}[], Data Record in t
### `.aiAssert(conditionPrompt: string, errorMsg?: string)` - do an assertion
This method will soon be available in MidScene.
This method will soon be available in Midscene.
`.aiAssert` works just like the normal `assert` method, except that the condition is a prompt string written in natural language. MidScene will call AI to determine if the `conditionPrompt` is true. If not, a detailed reason will be concatenated to the `errorMsg`.
`.aiAssert` works just like the normal `assert` method, except that the condition is a prompt string written in natural language. Midscene will call AI to determine if the `conditionPrompt` is true. If not, a detailed reason will be concatenated to the `errorMsg`.
```typescript
// coming soon
@ -132,7 +132,7 @@ export LANGCHAIN_API_KEY="your_key_here"
export LANGCHAIN_PROJECT="your_project_name_here"
```
Launch MidScene, you should see logs like this:
Launch Midscene, you should see logs like this:
```log
DEBUGGING MODE: langsmith wrapper enabled


@ -2,7 +2,7 @@
pageType: home
hero:
name: MidScene.js
name: Midscene.js
text: |
Powered by AI
Joyful UI Automation
@ -16,10 +16,10 @@ hero:
link: /docs/getting-started/quick-start
image:
src: /midscene.png
alt: MidScene Logo
alt: Midscene Logo
features:
- title: Natural Language Interaction
details: Describe the steps and let MidScene plan and control the user interface for you
details: Describe the steps and let Midscene plan and control the user interface for you
icon: 👆
- title: Understand UI, Answer in JSON
details: Provide prompts regarding the desired data format, and then receive the expected response in JSON format.


@ -2,9 +2,9 @@
UI 自动化太难写了。自动化脚本里到处都是选择器,比如 `#ids`、`data-test-xxx`、`.selectors`。在页面重构的时候,维护自动化脚本更将会是一场灾难。
我们在这里推出 MidScene.js助你重拾编码的乐趣。
我们在这里推出 Midscene.js助你重拾编码的乐趣。
MidScene.js 采用了多模态大语言模型LLM能够直观地“理解”你的用户界面并执行必要的操作。你只需描述交互步骤或期望的数据格式AI 就能为你完成任务。
Midscene.js 采用了多模态大语言模型LLM能够直观地“理解”你的用户界面并执行必要的操作。你只需描述交互步骤或期望的数据格式AI 就能为你完成任务。
# 特性
@ -48,6 +48,6 @@ const dataB = await agent.aiQuery('string[], 任务列表中的任务名');
## 流程图
下图展示了 MidScene 的核心流程。
下图展示了 Midscene 的核心流程。
![](/flow.png)


@ -131,7 +131,7 @@ Promise.resolve(
await page.goto("https://www.ebay.com");
await sleep(5000);
// 👀 初始化 MidScene agent
// 👀 初始化 Midscene agent
const mid = new PuppeteerAgent(page);
// 👀 执行搜索
@ -185,7 +185,7 @@ npx ts-node demo.ts
### 第四步:查看运行报告
运行 MidScene 之后,系统会生成一个日志文件,默认存放在 `./midscene_run/report/latest.web-dump.json`。然后,你可以把这个文件导入 [可视化工具](/visualization/),这样你就能更清楚地了解整个过程。
运行 Midscene 之后,系统会生成一个日志文件,默认存放在 `./midscene_run/report/latest.web-dump.json`。然后,你可以把这个文件导入 [可视化工具](/visualization/),这样你就能更清楚地了解整个过程。
## 访问示例报告


@ -1,17 +1,17 @@
# FAQ
### MidScene 能否根据一句话指令实现智能规划?比如执行 "发一条微博"
### Midscene 能否根据一句话指令实现智能规划?比如执行 "发一条微博"
MidScene 是一个辅助 UI 自动化的 SDK运行时稳定性很关键——即保证每次运行都能运行相同的动作。为了保持这种稳定性我们希望你提供详细的指令以帮助 AI 清晰地理解并执行。
Midscene 是一个辅助 UI 自动化的 SDK运行时稳定性很关键——即保证每次运行都能运行相同的动作。为了保持这种稳定性我们希望你提供详细的指令以帮助 AI 清晰地理解并执行。
如果你需要一个 '目标到任务' 的 AI 规划工具,不妨基于 MidScene 自行开发一个。
如果你需要一个 '目标到任务' 的 AI 规划工具,不妨基于 Midscene 自行开发一个。
关联文档:
* [编写提示词的技巧](./prompting-tips)
### 局限性
MidScene 存在一些局限性,我们仍在努力改进。
Midscene 存在一些局限性,我们仍在努力改进。
1. 交互类型有限:目前仅支持点击、输入、键盘和滚动操作。
2. 稳定性不足:即使是 GPT-4o 也无法确保 100% 返回正确答案。遵循 [编写提示词的技巧](./prompting-tips) 可以帮助提高 SDK 稳定性。
@ -19,7 +19,7 @@ MidScene 存在一些局限性,我们仍在努力改进。
### 关于 token 成本
图像分辨率和元素数量(即 MidScene 创建的 UI 上下文大小)会显著影响 token 消耗。
图像分辨率和元素数量(即 Midscene 创建的 UI 上下文大小)会显著影响 token 消耗。
以下是一些典型数据:
@ -33,8 +33,8 @@ MidScene 存在一些局限性,我们仍在努力改进。
### 脚本运行偏慢?
由于 MidScene.js 每次进行规划Planning和查询Query时都会调用 AI其运行耗时可能比传统 Playwright 用例增加 3 到 10 倍,比如从 5 秒变成 20秒。目前这一点仍无法避免。但随着大型语言模型LLM的进步未来性能可能会有所改善。
由于 Midscene.js 每次进行规划Planning和查询Query时都会调用 AI其运行耗时可能比传统 Playwright 用例增加 3 到 10 倍,比如从 5 秒变成 20秒。目前这一点仍无法避免。但随着大型语言模型LLM的进步未来性能可能会有所改善。
尽管运行时间较长MidScene 在实际应用中依然表现出色。它独特的开发体验会让代码库易于维护。我们相信,集成了 MidScene 的自动化脚本能够显著提升项目迭代效率,覆盖更多场景,提高整体生产力。
尽管运行时间较长Midscene 在实际应用中依然表现出色。它独特的开发体验会让代码库易于维护。我们相信,集成了 Midscene 的自动化脚本能够显著提升项目迭代效率,覆盖更多场景,提高整体生产力。
简而言之,虽然偏慢,但这些投入一定都是值得的。


@ -1,6 +1,6 @@
# 编写提示词的技巧
你在 MidScene 编写的自然语言参数最终都会变成提示词Prompt发送给大语言模型。以下是一些可以帮助提升效果的提示词工程Prompt Engineering技巧。
你在 Midscene 编写的自然语言参数最终都会变成提示词Prompt发送给大语言模型。以下是一些可以帮助提升效果的提示词工程Prompt Engineering技巧。
## 目标是获得更稳定的响应
@ -28,7 +28,7 @@
### 使用可视化工具调试
使用可视化工具调试和理解 MidScene 的每个步骤。只需上传日志,就可以查看 AI 的解析结果。你可以在本站导航栏上找到 [可视化工具](/visualization/)。
使用可视化工具调试和理解 Midscene 的每个步骤。只需上传日志,就可以查看 AI 的解析结果。你可以在本站导航栏上找到 [可视化工具](/visualization/)。
### 通过断言交叉检查结果


@ -2,7 +2,7 @@
## 配置 AI 服务商
MidScene 默认集成了 OpenAI SDK 调用 AI 服务,你也可以通过环境变量来自定义配置。
Midscene 默认集成了 OpenAI SDK 调用 AI 服务,你也可以通过环境变量来自定义配置。
主要配置项如下,其中 `OPENAI_API_KEY` 是必选项:
@ -50,7 +50,7 @@ const mid = new PuppeteerAgent(puppeteerPageInstance);
### `.aiAction(steps: string)``.ai(steps: string)` - 控制界面
你可以使用 `.aiAction` 来执行一系列操作。它接受一个参数 `steps: string` 用于描述这些操作。在这个参数中,你应该清楚地描述每一个步骤,然后 MidScene 会自动为你分析并执行。
你可以使用 `.aiAction` 来执行一系列操作。它接受一个参数 `steps: string` 用于描述这些操作。在这个参数中,你应该清楚地描述每一个步骤,然后 Midscene 会自动为你分析并执行。
`.ai``.aiAction` 的简写。
@ -66,7 +66,7 @@ await mid.ai('点击任务列表下方的 "completed" 状态按钮');
务必使用清晰、详细的步骤描述。使用非常简略的指令(如 “发一条微博” )会导致非常不稳定的执行结果或运行失败。
在底层MidScene 会将页面上下文和截图发送给 LLM以详细规划步骤。随后MidScene 会逐步执行这些步骤。如果 MidScene 认为无法执行,将抛出一个错误。
在底层Midscene 会将页面上下文和截图发送给 LLM以详细规划步骤。随后Midscene 会逐步执行这些步骤。如果 Midscene 认为无法执行,将抛出一个错误。
你的任务会被拆解成下述内置方法,你可以在可视化工具中看到它们:
@ -74,15 +74,15 @@ await mid.ai('点击任务列表下方的 "completed" 状态按钮');
2. **操作Action**点击、滚动、键盘输入、悬停hover
3. **其他**等待sleep
目前MidScene 无法规划包含条件和循环的步骤。
目前Midscene 无法规划包含条件和循环的步骤。
关联文档:
* [FAQ: MidScene 能否根据一句话指令实现智能操作?比如执行 "发一条微博"'](../more/faq.html)
* [FAQ: Midscene 能否根据一句话指令实现智能操作?比如执行 "发一条微博"'](../more/faq.html)
* [编写提示词的技巧](../more/prompting-tips.html)
### `.aiQuery(dataShape: any)` - 从页面提取数据
这个方法可以从 UI 提取自定义数据。它不仅能返回页面上直接书写的数据,还能基于“理解”返回数据(前提是多模态 AI 能够推理。返回值可以是任何合法的基本类型比如字符串、数字、JSON、数组等。你只需在 `dataDemand` 中描述它MidScene 就会给你满足格式的返回。
这个方法可以从 UI 提取自定义数据。它不仅能返回页面上直接书写的数据,还能基于“理解”返回数据(前提是多模态 AI 能够推理。返回值可以是任何合法的基本类型比如字符串、数字、JSON、数组等。你只需在 `dataDemand` 中描述它Midscene 就会给你满足格式的返回。
例如,从页面解析详细信息:
@ -107,7 +107,7 @@ const dataC = await mid.aiQuery('{name: string, age: string}[], 表格中的数
这个方法即将上线。
`.aiAssert` 的功能类似于一般的 `assert` 方法,但可以用自然语言编写条件参数 `conditionPrompt`。MidScene 会调用 AI 来判断条件是否为真。若满足条件,详细原因会附加到 `errorMsg` 中。
`.aiAssert` 的功能类似于一般的 `assert` 方法,但可以用自然语言编写条件参数 `conditionPrompt`。Midscene 会调用 AI 来判断条件是否为真。若满足条件,详细原因会附加到 `errorMsg` 中。
## 使用 LangSmith (可选)
@ -127,7 +127,7 @@ export LANGCHAIN_API_KEY="your_key_here"
export LANGCHAIN_PROJECT="your_project_name_here"
```
启动 MidScene 后,你应该会看到类似如下的日志:
启动 Midscene 后,你应该会看到类似如下的日志:
```log
DEBUGGING MODE: langsmith wrapper enabled


@ -2,7 +2,7 @@
pageType: home
hero:
name: MidScene.js
name: Midscene.js
text: AI 加持,更愉悦的 UI 自动化
tagline:
actions:
@ -14,11 +14,11 @@ hero:
link: /docs/getting-started/quick-start
image:
src: /midscene.png
alt: MidScene Logo
alt: Midscene Logo
features:
- title: 自然语言互动
details: 只需描述你的步骤MidScene 会为你规划和操作用户界面
details: 只需描述你的步骤Midscene 会为你规划和操作用户界面
icon: 👆
- title: 理解UI、JSON格式回答
details: 你可以提出关于数据格式的要求,然后得到 JSON 格式的预期回应


@ -3,7 +3,7 @@ import { defineConfig } from 'rspress/config';
export default defineConfig({
root: path.join(__dirname, 'docs'),
title: 'MidScene.js',
title: 'Midscene.js',
description: 'Your AI-Driven UI Compass',
icon: '/midscene-icon.png',
logo: {
@ -38,15 +38,15 @@ export default defineConfig({
lang: 'en',
// The label in nav bar to switch language
label: 'English',
title: 'MidScene.js',
description: 'MidScene.js',
title: 'Midscene.js',
description: 'Midscene.js',
},
{
lang: 'zh',
// The label in nav bar to switch language
label: '简体中文',
title: 'MidScene.js',
description: 'MidScene.js',
title: 'Midscene.js',
description: 'Midscene.js',
},
],
lang: 'en',


@ -8,7 +8,7 @@
"e2e": "nx run-many --target=e2e --projects=@midscene/core,@midscene/visualizer,@midscene/web --verbose",
"prepare": "pnpm run build:pkg && simple-git-hooks",
"check-dependency-version": "check-dependency-version-consistency .",
"lint": "biome lint --changed --diagnostic-level=warn --fix --no-errors-on-unmatched",
"lint": "npx biome check . --diagnostic-level=warn --no-errors-on-unmatched --fix",
"format:ci": "pretty-quick --since HEAD~1",
"format": "pretty-quick --staged",
"commit": "cz"


@ -1,6 +1,6 @@
MIT License
Copyright (c) 2024-present MidScene.js
Copyright (c) 2024-present Midscene.js
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@ -1,6 +1,6 @@
{
"name": "@midscene/core",
"description": "Hello, It's MidScene",
"description": "Hello, It's Midscene",
"version": "0.1.4",
"jsnext:source": "./src/index.ts",
"main": "./dist/lib/index.js",


@ -84,7 +84,7 @@ export function writeDumpFile(opts: {
if (!gitIgnoreContent.includes(`${logDirName}/`)) {
writeFileSync(
gitIgnorePath,
`${gitIgnoreContent}\n# MidScene.js dump files\n${logDirName}/report\n${logDirName}/dump-logger\n`,
`${gitIgnoreContent}\n# Midscene.js dump files\n${logDirName}/report\n${logDirName}/dump-logger\n`,
'utf-8',
);
}


@ -4,5 +4,5 @@ node_modules/
/blob-report/
/playwright/.cache/
# MidScene.js dump files
# Midscene.js dump files
midscene_run/


@ -232,7 +232,7 @@ export function Visualizer(props: {
}}
>
<Helmet>
<title>MidScene.js - Visualization Tool</title>
<title>Midscene.js - Visualization Tool</title>
</Helmet>
<div
className="page-container"


@ -1,4 +1,4 @@
# MidScene.js dump files
# Midscene.js dump files
midscene_run/report
midscene_run/dump


@ -1,6 +1,6 @@
{
"name": "@midscene/web",
"description": "Web integration for MidScene.js",
"description": "Web integration for Midscene.js",
"version": "0.1.4",
"jsnext:source": "./src/index.ts",
"main": "./dist/lib/index.js",


@ -58,7 +58,7 @@ class MidSceneReporter implements Reporter {
generateTestData(testDataList);
console.log(
'\x1b[32m%s\x1b[0m',
`MidScene report has been generated.\nRun "npx http-server ./midscene_run/report -p 9888 -o -s" to view.`,
`Midscene report has been generated.\nRun "npx http-server ./midscene_run/report -p 9888 -o -s" to view.`,
);
}
}
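For context, the reporter hunk above ends by printing a green console notice with serving instructions. A minimal stand-in — not the real MidSceneReporter, which also generates the report data before printing — looks like:

```javascript
// Simplified stand-in for the Playwright reporter's onEnd hook.
// It only reproduces the console notice; report generation is omitted.
class ReportNotice {
  onEnd() {
    const msg =
      'Midscene report has been generated.\n' +
      'Run "npx http-server ./midscene_run/report -p 9888 -o -s" to view.';
    // \x1b[32m ... \x1b[0m renders the message in green in the terminal
    console.log('\x1b[32m%s\x1b[0m', msg);
    return msg;
  }
}

new ReportNotice().onEnd();
```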