Compare commits

...

93 Commits

Author SHA1 Message Date
arch3rPro
e8822d711a feat(craft-agents): bump craft-agents to 0.8.7
Update the version to 0.8.7, including a new docker-compose config and data files
2026-04-14 23:21:57 +08:00
arch3rPro
365e7c710a feat(craft-agents): bump craft-agents to 0.8.6
Remove the 0.8.5 files and add the 0.8.6 configuration
Update the docker-compose config and i18n form fields
2026-04-14 22:40:59 +08:00
arch3rPro
3a3123cf99 feat: update flowise 3.1.2 2026-04-14 22:32:49 +08:00
arch3rPro
3af85df816 feat: update cliproxyapi-plus 6.9.23-0 2026-04-14 22:32:12 +08:00
arch3rPro
01c003495d feat: update axonhub 0.9.32 2026-04-14 22:31:55 +08:00
arch3rPro
3df51564ef refactor(litellm): bump app version from v1.83.0-nightly to v1.83.3-stable
Migrate config and data files to the new version directory
Update the docker-compose config to match the stable release
2026-04-14 22:31:28 +08:00
arch3rPro
9f19223ab7 feat: update nocodb 2026.04.0 2026-04-14 05:33:06 +08:00
arch3rPro
e247555ec3 feat: update new-api 0.12.9 2026-04-14 05:32:43 +08:00
arch3rPro
ddb4652d65 feat: update new-api 0.12.9-allinone 2026-04-14 05:32:20 +08:00
arch3rPro
710a218489 feat: update n8n-zh 2.17.0 2026-04-14 05:32:08 +08:00
arch3rPro
f18bb659fd feat: update inspector 0.21.2 2026-04-14 05:31:34 +08:00
arch3rPro
300b4fd8da feat: update new-api 0.12.8 2026-04-13 05:32:25 +08:00
arch3rPro
b0b24b0adc feat: update new-api 0.12.8-allinone 2026-04-13 05:32:01 +08:00
arch3rPro
b3959314d1 feat: update blinko 1.8.7 2026-04-13 05:30:25 +08:00
arch3rPro
ec095eba5d fix(axonhub): correct the image tag format in docker-compose
Change the image tag from 0.9.31 to v0.9.31 for consistent version tagging
2026-04-12 17:09:47 +08:00
arch3rPro
e44146ebf6 feat(axonhub): add the axonhub app configuration and docs
Add the axonhub docker-compose config, data files, logo, and README
2026-04-12 17:07:04 +08:00
arch3rPro
a1a8e77b3a feat: update prompt-optimizer 2.9.3 2026-04-11 05:32:11 +08:00
arch3rPro
253b0d2005 feat(craft-agents): bump the app to 0.8.5 and update config files 2026-04-10 10:24:23 +08:00
arch3rPro
af89dd30a7 fix(craft-agents): run the server via bun to allow unsafe binding
Add a command entry that runs the server with bun and allows unsafe binding so the service starts correctly
2026-04-10 10:21:07 +08:00
arch3rPro
6ad7b05160 feat(craft-agents): add the craft-agents app configuration and docs
Add the craft-agents docker-compose config, data config file, and README
2026-04-10 09:52:05 +08:00
arch3rPro
354fb12eba feat: update new-api 0.12.6 2026-04-10 05:31:34 +08:00
arch3rPro
a3be1dac39 feat: update new-api 0.12.6-allinone 2026-04-10 05:31:15 +08:00
arch3rPro
8e5b454640 feat: update new-api 0.12.5 2026-04-09 05:31:46 +08:00
arch3rPro
8c0e79a720 feat: update new-api 0.12.5-allinone 2026-04-09 05:31:24 +08:00
arch3rPro
d31e124a8e feat: update prompt-optimizer 2.9.2 2026-04-08 05:32:55 +08:00
arch3rPro
4df70ffc78 feat: update nzbget 26.1 2026-04-08 05:32:38 +08:00
arch3rPro
464e3656de feat: update new-api 0.12.3 2026-04-08 05:32:07 +08:00
arch3rPro
f5b5428851 feat: update new-api 0.12.3-allinone 2026-04-08 05:31:49 +08:00
arch3rPro
21cd2fe790 feat: update n8n-zh 2.16.0 2026-04-08 05:31:39 +08:00
arch3rPro
bee54bcdd4 feat: update gpt4free 7.4.7-slim 2026-04-08 05:31:08 +08:00
arch3rPro
3ef319c34b feat: update gpt4free 7.4.7 2026-04-08 05:30:58 +08:00
arch3rPro
2b21339f76 feat: update easytier 2.6.0 2026-04-08 05:30:37 +08:00
arch3rPro
1e8e28b50e feat: update prompt-optimizer 2.9.1 2026-04-07 05:33:01 +08:00
arch3rPro
50cfabba9a feat: update nezha 2.0.7 2026-04-07 05:32:25 +08:00
arch3rPro
a31df60b8c feat: update new-api 0.12.2 2026-04-07 05:31:51 +08:00
arch3rPro
269398241d feat: update new-api 0.12.2-allinone 2026-04-07 05:31:32 +08:00
arch3rPro
03633cdb8e feat: update gpt4free 7.4.3-slim 2026-04-07 05:30:55 +08:00
arch3rPro
0cd611e796 feat: update gpt4free 7.4.3 2026-04-07 05:30:45 +08:00
arch3rPro
b9eddadda4 feat: update qexo 4.1.1 2026-04-06 05:32:14 +08:00
arch3rPro
6fe13bd7bc feat: update docmost 0.71.1 2026-04-06 05:30:42 +08:00
arch3rPro
58593348cf feat: update beszel-agent 0.18.7 2026-04-06 05:30:17 +08:00
arch3rPro
a270dafeeb feat: update gpt4free 7.3.9-slim 2026-04-05 05:30:54 +08:00
arch3rPro
e8f5b20093 feat: update gpt4free 7.3.9 2026-04-05 05:30:44 +08:00
arch3rPro
cc3f716151 feat: update gpt4free 7.3.7-slim 2026-04-04 05:30:56 +08:00
arch3rPro
3ab8d1b61d feat: update gpt4free 7.3.7 2026-04-04 05:30:47 +08:00
arch3rPro
74b14d5feb feat(litellm): add v1.83.0-nightly and update configs
Update docker-compose.yml and data.yml for the latest version, and add the v1.83.0-nightly config files
Change the LITELLM_MASTER_KEY field type to password and update its default value
2026-04-03 15:07:11 +08:00
arch3rPro
20ea51d3ec feat: add the LiteLLM app configuration and deployment files
Add LiteLLM's configuration, deployment files, and docs, including:
- Prometheus monitoring config
- Docker Compose deployment file
- app metadata config
- README documentation
2026-04-03 15:06:34 +08:00
arch3rPro
21b4089535 feat: update prompt-optimizer 2.8.0 2026-04-03 05:32:04 +08:00
arch3rPro
859940d036 feat: update new-api 0.12.1 2026-04-03 05:31:29 +08:00
arch3rPro
f3eae2feac feat: update new-api 0.12.1-allinone 2026-04-03 05:31:12 +08:00
arch3rPro
47fa6c4bca feat: update langflow 1.8.4 2026-04-03 05:30:55 +08:00
arch3rPro
2790ace79f docs: introduce the Sub2API, CLIProxyAPI Plus, and Trae-Proxy projects in the README 2026-04-03 01:37:28 +08:00
arch3rPro
468cceabd9 refactor(sub2api): restructure config management and version layout
Remove hard-coded config files in favor of environment variables
Add a 0.1.106 stable version directory structure
Update the README to document the auto-generated password feature
2026-04-03 01:28:09 +08:00
arch3rPro
4259135298 feat(cliproxyapi-plus): upgrade to v6.9.9-0 and update docs
- Add docker-compose.yml and data.yml for 6.9.9-0
- Remove the old 6.9.5-0 config files
- Update the README with detailed configuration notes
- Rename port variables to be more descriptive
- Point documentation links at the latest help center
2026-04-02 22:50:35 +08:00
arch3rPro
dc57fd7270 feat: update new-api 0.12.0 2026-04-02 05:31:39 +08:00
arch3rPro
a8d61e4d2c feat: update new-api 0.12.0-allinone 2026-04-02 05:31:18 +08:00
arch3rPro
447ceb0a01 update logo.png 2026-04-01 23:00:39 +08:00
arch3rPro
bee2a5a0fc docs(tailscale-derp): flesh out the login guide and update the docker-compose config
1. Detail the tailscale container login steps in the README and add status-verification notes
2. Update docker-compose.yml with the TS_USERSPACE environment variable and the userspace-networking command
3. Default DERP_VERIFY_CLIENTS to true
2026-04-01 21:15:30 +08:00
arch3rPro
a81c787a0b feat: update skills 2026-04-01 18:41:10 +08:00
arch3rPro
6013a7944e feat: Add app cliproxyapi and tailscale-Derp 2026-04-01 18:38:58 +08:00
arch3rPro
e27d8ac57e feat: update new-api 0.11.9 2026-04-01 05:32:39 +08:00
arch3rPro
bd56602dd3 feat: update new-api 0.11.9-allinone 2026-04-01 05:32:14 +08:00
arch3rPro
18bd2aaf75 feat: update gpt4free 7.3.5-slim 2026-04-01 05:31:28 +08:00
arch3rPro
80f8395411 feat: update gpt4free 7.3.5 2026-04-01 05:31:15 +08:00
arch3rPro
a89b8a9dde feat: update docmost 0.71.0 2026-04-01 05:30:52 +08:00
arch3rPro
5cf95e6c05 feat: update n8n-zh 2.15.0 2026-03-31 00:56:16 +08:00
arch3rPro
ada91aa913 feat: update gpt-load 1.4.6 2026-03-30 05:32:23 +08:00
arch3rPro
4cb5055487 feat: update beszel-agent 0.18.6 2026-03-30 05:30:20 +08:00
arch3rPro
b3d2200e91 feat: update tianji 1.31.20 2026-03-28 05:33:15 +08:00
arch3rPro
5c140cba11 feat: update langflow 1.8.3 2026-03-28 05:31:31 +08:00
arch3rPro
3714fb54d1 feat: update dify 1.13.3 2026-03-28 05:30:45 +08:00
arch3rPro
5a1dc0de21 feat: update beszel-agent 0.18.5 2026-03-28 05:30:22 +08:00
arch3rPro
d94b6f184f feat: update tianji 1.31.19 2026-03-27 05:34:44 +08:00
arch3rPro
6f1bf8f39f feat: update n8n-zh 2.14.2 2026-03-27 05:33:21 +08:00
arch3rPro
563b508b0b feat: update linkwarden 2.14.0 2026-03-27 05:33:01 +08:00
arch3rPro
d156f613e4 feat: update langflow 1.8.2 2026-03-27 05:31:18 +08:00
arch3rPro
476522b371 feat: update dify 1.13.3 2026-03-27 05:30:31 +08:00
arch3rPro
d73c558425 update README.md 2026-03-26 10:20:05 +08:00
arch3rPro
f3224d8a51 Fix: remove LiteLLM 2026-03-26 10:19:36 +08:00
arch3rPro
0a29059adc feat: update prompt-optimizer 2.7.0 2026-03-26 05:33:21 +08:00
arch3rPro
987951d074 feat: update piclist 2.3.5 2026-03-26 05:33:07 +08:00
arch3rPro
1319f362cb feat: update n8n-zh 2.14.1 2026-03-26 05:32:01 +08:00
arch3rPro
a4774272fe feat: update dify 1.13.3 2026-03-26 05:30:42 +08:00
arch3rPro
40684cf6fb feat: update tianji 1.31.18 2026-03-25 05:34:26 +08:00
arch3rPro
2ca7412fff feat: update n8n-zh 2.14.0 2026-03-25 05:32:22 +08:00
arch3rPro
1632a44057 feat: update README.md 2026-03-24 19:09:05 +08:00
arch3rPro
1bf528acc8 feat: add AI-powered 1Panel app builder skill
- Add skill configuration for generating 1Panel app configs via AI
- Include templates for data.yml and docker-compose.yml
- Add utility scripts for app generation, icon download, and validation
- Provide reference examples and usage documentation
- Update .gitignore to exclude .trae directory
- Update README.md with skill usage instructions
2026-03-24 19:00:47 +08:00
arch3rPro
69ad9e1a76 feat: add Sub2API application for AI API gateway platform
- Add Sub2API 1Panel application configuration
- Support subscription quota distribution, API Key management, billing and load balancing
- Include docker-compose.yml, data.yml, README documentation and logo
- Support amd64 and arm64 architectures
2026-03-24 19:00:29 +08:00
arch3rPro
e1104e4dcf feat: update LiteLLM v1.82.6-nightly 2026-03-24 10:01:37 +08:00
arch3rPro
833e3cbfd8 feat: update Searxng 2026.3.23-2c1ce3bd3 2026-03-23 22:22:14 +08:00
arch3rPro
eee6a147ea feat: update docker-compose.yml 2026-03-23 22:15:34 +08:00
arch3rPro
6d4fdcceb5 feat: update prompt-optimizer 2.6.3 2026-03-23 21:32:12 +08:00
arch3rPro
9a1522fdb4 feat: update flowise 3.1.1 2026-03-23 21:24:13 +08:00
179 changed files with 5263 additions and 193 deletions

.gitignore (vendored, 2 changed lines)

@@ -10,3 +10,5 @@
# Update
/update
# Skills
.trae

README.md (219 changed lines)

@@ -18,16 +18,88 @@
</p>
### 📖 About This Repository
- This repository contains multiple apps for 1Panel and aims to give users a simple, fast install and update experience. All apps are open-source projects and support automated installation and updates via 1Panel's scheduled-task feature. The script provided in this repository makes it easy to integrate the apps into a 1Panel system.
- The repository is a curated collection of quality apps rather than an exhaustive catalog (rarely used apps just clutter browsing and search); app recommendations can be submitted via an issue.
### ⚠️ Disclaimer
- This is an unofficial, third-party app store
- No express or implied warranty is made about the validity of any upstream image; assess security and risk yourself
- You may fork this repository and update it yourself, but republishing a merged copy with the personal attribution removed, without authorization, is strictly prohibited
### 🚀 Usage
#### 📋 Add the script as a 1Panel scheduled task
1. In the 1Panel control panel, open the "Scheduled Tasks" page.
2. Click "Create Task" and choose "Shell Script" as the task type.
3. Paste the following code into the script box:
```bash
#!/bin/bash
# Remove any stale temporary directory
rm -rf /tmp/appstore_merge
# Clone appstore-arch3rPro
git clone --depth=1 https://ghfast.top/https://github.com/arch3rPro/1Panel-Appstore /tmp/appstore_merge/appstore-arch3rPro
# Copy the data (full copy)
cp -rf /tmp/appstore_merge/appstore-arch3rPro/apps/* /opt/1panel/resource/apps/local/
# Clean up the temporary directory
rm -rf /tmp/appstore_merge
echo "App store data updated"
```
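The copy step above can be sanity-checked in a throwaway sandbox before pointing it at the real `/opt/1panel` path. Everything below uses temporary directories and a made-up `demo-app` (illustrative names, not part of this repository):

```shell
#!/bin/bash
# Sandbox stand-ins for the cloned repo and 1Panel's local app directory
src=$(mktemp -d)
dst=$(mktemp -d)

# Fake a cloned app-store layout: apps/<app>/data.yml
mkdir -p "$src/apps/demo-app"
echo "name: demo-app" > "$src/apps/demo-app/data.yml"

# Same merge semantics as the scheduled task: copy apps/* into the target
cp -rf "$src"/apps/* "$dst"/

# The app directory should now exist directly under the target
test -f "$dst/demo-app/data.yml" && echo "copy ok"
```

Note that `cp -rf apps/*` merges each app's directory directly into the target, so existing apps with the same name are overwritten in place.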
### 🤖 Generating app configs quickly with AI
This repository ships a Skill configuration that lets you quickly generate 1Panel app configurations in AI clients such as Cursor, Windsurf, and Claude Code.
#### 📁 Skills directory layout
```
skills/
├── SKILL.md                   # 1Panel App Builder skill definition
├── README.md                  # usage documentation
├── templates/                 # configuration templates
│   ├── data.yml.tpl           # app metadata template
│   └── docker-compose.yml.tpl # compose file template
├── scripts/                   # utility scripts
│   ├── generate-app.sh        # main generation script
│   ├── download-icon.sh       # icon download tool
│   └── validate-app.sh        # config validation tool
├── references/                # reference examples
│   └── 1panel-examples.md
└── examples/                  # usage examples
    └── example-usage.md
```
#### 💡 Examples
Give the AI any one of the following and it will generate a complete app configuration:
```
# GitHub project
Add the app AList https://github.com/alist-org/alist
# docker-compose file
Generate a 1Panel app configuration from this docker-compose.yml
# docker run command
Convert this docker run command into a 1Panel app:
docker run -d --name=nginx -p 80:80 nginx:latest
```
#### ✨ The generated configuration includes
- `data.yml` - app metadata (top level)
- `version/data.yml` - parameter definitions (form fields)
- `docker-compose.yml` - Docker compose file
- `README.md` - Chinese documentation
- `README_en.md` - English documentation
- `logo.png` - app icon
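The version-level `data.yml` carries the form fields that 1Panel renders at install time. A minimal structure check can be sketched in plain Python; the dict literal stands in for a parsed YAML file, and the required-key list is an assumption drawn from the examples in this repository, not a 1Panel specification:

```python
# A parsed version-level data.yml, as a plain dict (stand-in for yaml.safe_load)
data = {
    "additionalProperties": {
        "formFields": [
            {
                "default": 8090,
                "edit": True,
                "envKey": "PANEL_APP_PORT_HTTP",
                "labelEn": "Web Port",
                "labelZh": "Web端口",
                "required": True,
                "rule": "paramPort",
                "type": "number",
            }
        ]
    }
}

# Keys every form field in this repo's examples carries (assumed, not a spec)
REQUIRED = {"default", "edit", "envKey", "labelEn", "labelZh", "required", "type"}

def check_form_fields(doc: dict) -> list:
    """Return a list of problems found in the formFields section."""
    problems = []
    fields = doc.get("additionalProperties", {}).get("formFields", [])
    for i, field in enumerate(fields):
        missing = REQUIRED - field.keys()
        if missing:
            problems.append(f"field {i} missing keys: {sorted(missing)}")
    return problems

print(check_form_fields(data))  # → []
```

Each `envKey` here must match a `${...}` placeholder in the app's `docker-compose.yml`, which is how 1Panel injects the user's form answers.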
### 📱 App List
@@ -36,13 +108,9 @@
#### 🤖 Free LLM API endpoints
- One-click deployment of free AI API endpoints; see each app's **README** for usage details
- **The Free-API series has been delisted: the upstream projects were compromised by a supply-chain attack and injected with malicious code; stop and delete these services immediately**
- After several days of auditing and refactoring, [GLM-Free-API](https://github.com/xiaoY233/GLM-Free-API), [MiniMax-Free-API](https://github.com/xiaoY233/MiniMax-Free-API), [Qwen-Free-API](https://github.com/xiaoY233/Qwen-Free-API), [Kimi-Free-API](https://github.com/xiaoY233/Kimi-Free-API) and [DeepSeek-Free-API](https://github.com/xiaoY233/DeepSeek-Free-API) have been relisted. Source-code review is welcome; if in doubt, it is still safer to keep them suspended!
- Other Free-API apps may follow as time allows; upcoming updates will focus on making the Free-APIs above compatible with Gemini-CLI and Claude API access.
- The upstream LiteLLM project has suffered another supply-chain attack. **Stop and delete the service immediately. As a precaution, delete or disable any leaked API keys with your AI providers to prevent abuse.**
<table>
<tr>
@@ -103,8 +171,8 @@
<!-- <a href="./apps/jimeng-free-api/README.md">
<img src="./apps/jimeng-free-api/logo.png" width="60" height="60" alt="Jimeng-Free-API"> -->
<br><b>Jimeng-Free-API</b>
</a>
<b>Jimeng-Free-API</b> </a>
🚀 即梦3.0逆向API【特长图像生成顶流】
@@ -115,8 +183,8 @@
<!-- <a href="./apps/spark-free-api/README.md">
<img src="./apps/spark-free-api/logo.png" width="60" height="60" alt="Spark-Free-API"> -->
<br><b>Spark-Free-API</b>
</a>
<b>Spark-Free-API</b> </a>
🚀 讯飞星火大模型逆向API【特长办公助手】
@@ -144,8 +212,8 @@
<!-- <a href="./apps/step-free-api/README.md">
<img src="./apps/step-free-api/logo.png" width="60" height="60" alt="Step-Free-API"> -->
<br><b>Step-Free-API</b>
</a>
<b>Step-Free-API</b> </a>
🚀 阶跃星辰跃问Step 多模态大模型逆向API【特长超强多模态】
@@ -156,8 +224,8 @@
<!-- <a href="./apps/metaso-free-api/README.md">
<img src="./apps/metaso-free-api/logo.png" width="60" height="60" alt="Metaso-Free-API"> -->
<br><b>Metaso-Free-API</b>
</a>
<b>Metaso-Free-API</b> </a>
🚀 秘塔AI搜索逆向API【特长超强检索超长输出】
@@ -178,7 +246,7 @@
🚀 免费的GPT-4和其他大语言模型API接口
<kbd>7.3.4-slim</kbd> • [官网链接](https://github.com/xtekky/gpt4free)
<kbd>7.4.7-slim</kbd> • [官网链接](https://github.com/xtekky/gpt4free)
</td>
<td width="33%" align="center">
@@ -208,7 +276,6 @@
</tr>
</table>
#### 📝 文档与内容管理
<table>
@@ -222,7 +289,7 @@
轻量级文档管理系统,支持多人协作编辑与版本控制
<kbd>0.70.3</kbd> • [官网链接](https://github.com/docmost/docmost)
<kbd>0.71.1</kbd> • [官网链接](https://github.com/docmost/docmost)
</td>
<td width="33%" align="center">
@@ -246,7 +313,7 @@
美观强大的在线静态博客管理器,支持多种平台
<kbd>4.0.1</kbd> • [官网链接](https://github.com/Qexo/Qexo)
<kbd>4.1.1</kbd> • [官网链接](https://github.com/Qexo/Qexo)
</td>
</tr>
@@ -287,7 +354,7 @@
自托管协作书签管理工具,支持网页归档和团队协作
<kbd>2.13.5</kbd> • [官网链接](https://github.com/linkwarden/linkwarden)
<kbd>2.14.0</kbd> • [官网链接](https://github.com/linkwarden/linkwarden)
</td>
</tr>
@@ -316,7 +383,7 @@
开源自托管个人笔记工具支持AI增强笔记检索
<kbd>1.8.6</kbd> • [官网链接](https://github.com/blinko-space/blinko)
<kbd>1.8.7</kbd> • [官网链接](https://github.com/blinko-space/blinko)
</td>
<td width="33%" align="center">
@@ -357,7 +424,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
开源Airtable替代品将任何数据库转换为智能电子表格
<kbd>0.301.5</kbd> • [官网链接](https://github.com/nocodb/nocodb)
<kbd>2026.04.0</kbd> • [官网链接](https://github.com/nocodb/nocodb)
</td>
<td width="33%" align="center">
@@ -388,7 +455,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
🌐 简单安全去中心化的内网穿透 VPN 组网方案
<kbd>2.5.0</kbd> • [官网链接](https://github.com/EasyTier/Easytier)
<kbd>2.6.0</kbd> • [官网链接](https://github.com/EasyTier/Easytier)
</td>
<td width="33%" align="center">
@@ -486,7 +553,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
🤖 开源LLM应用开发平台支持AI工作流和RAG管道
<kbd>1.13.2</kbd> • [官网链接](https://github.com/langgenius/dify)
<kbd>1.13.3</kbd> • [官网链接](https://github.com/langgenius/dify)
</td>
<td width="33%" align="center">
@@ -498,7 +565,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
🚀 强大的AI提示词优化工具支持多种主流大语言模型
<kbd>2.6.2</kbd> • [官网链接](https://github.com/arch3rPro/Prompt-Optimizer)
<kbd>2.9.3</kbd> • [官网链接](https://github.com/arch3rPro/Prompt-Optimizer)
</td>
</tr>
@@ -515,7 +582,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
🍥 新一代大模型网关与AI资产管理系统支持多种模型统一调用
<kbd>0.11.8</kbd> • [官网链接](https://docs.newapi.pro/)
<kbd>0.12.9</kbd> • [官网链接](https://docs.newapi.pro/)
</td>
<td width="33%" align="center">
@@ -539,7 +606,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
🚀 智能密钥轮询的多渠道AI代理高性能企业级AI接口透明代理服务
<kbd>1.4.4</kbd> • [官网链接](https://github.com/tbphp/gpt-load)
<kbd>1.4.6</kbd> • [官网链接](https://github.com/tbphp/gpt-load)
</td>
</tr>
@@ -556,7 +623,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
🔮 开源可视化AI工作流构建平台拖拽式设计LLM应用
<kbd>3.1.0</kbd> • [官网链接](https://github.com/FlowiseAI/Flowise)
<kbd>3.1.2</kbd> • [官网链接](https://github.com/FlowiseAI/Flowise)
</td>
<td width="33%" align="center">
@@ -568,7 +635,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
🔍 模型上下文协议调试工具支持MCP服务器测试与开发
<kbd>0.21.1</kbd> • [官网链接](https://github.com/modelcontextprotocol/inspector)
<kbd>0.21.2</kbd> • [官网链接](https://github.com/modelcontextprotocol/inspector)
</td>
<td width="33%" align="center">
@@ -638,19 +705,19 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
🔮 强大的AI应用构建平台可视化设计AI驱动的工作流和代理
<kbd>1.8.1</kbd> • [官网链接](https://langflow.org/)
<kbd>1.8.4</kbd> • [官网链接](https://langflow.org/)
</td>
<td width="33%" align="center">
<a href="./apps/litellm/README.md">
<img src="./apps/litellm/logo.png" width="60" height="60" alt="LiteLLM">
<!-- <a href="">
<img src="./apps/litellm/logo.png" width="60" height="60" alt="LiteLLM"> -->
<br><b>LiteLLM</b>
</a>
🔧 使用OpenAI格式统一调用所有LLM API支持多种云服务商
<kbd>latest</kbd> • [官网链接](https://github.com/BerriAI/litellm)
<kbd>已下架</kbd> • [官网链接](https://github.com/BerriAI/litellm)
</td>
<td width="33%" align="center">
@@ -662,7 +729,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
🔄 n8n汉化版具有原生AI能力的Fair-code工作流自动化平台
<kbd>2.13.2</kbd> • [官网链接](https://n8n.io/)
<kbd>2.17.0</kbd> • [官网链接](https://n8n.io/)
</td>
</tr>
@@ -697,6 +764,47 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
</tr>
</table>
<table>
<tr>
<td width="33%" align="center">
<a href="./apps/sub2api/README.md">
<img src="./apps/sub2api/logo.png" width="60" height="60" alt="Sub2API">
<br><b>Sub2API</b>
</a>
🍥 AI API 网关平台支持订阅配额分发、API Key 管理、计费和负载均衡
<kbd>0.1.106</kbd> • [官网链接](https://sub2api.org)
</td>
<td width="33%" align="center">
<a href="./apps/cliproxyapi-plus/README.md">
<img src="./apps/cliproxyapi-plus/logo.png" width="60" height="60" alt="CLIProxyAPI Plus">
<br><b>CLIProxyAPI Plus</b>
</a>
🔗 CLIProxyAPI Plus 代理API服务
<kbd>6.9.23-0</kbd> • [官网链接](https://github.com/router-for-me/CLIProxyAPIPlus)
</td>
<td width="33%" align="center">
<a href="./apps/trae-proxy/README.md">
<img src="./apps/trae-proxy/logo.png" width="60" height="60" alt="Trae-Proxy">
<br><b>Trae-Proxy</b>
</a>
🎯 一个智能的API代理工具专门用于拦截和重定向OpenAI API请求到自定义后端服务
<kbd>1.0.0</kbd> • [官网链接](https://github.com/arch3rPro/Trae-Proxy)
</td>
</tr>
</table>
#### 🎵 多媒体管理
<table>
@@ -734,7 +842,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
🖼️ 高效云存储和图床平台管理工具
<kbd>2.3.3</kbd> • [官网链接](https://github.com/Kuingsmile/PicList)
<kbd>2.3.5</kbd> • [官网链接](https://github.com/Kuingsmile/PicList)
</td>
</tr>
@@ -751,7 +859,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
📥 高性能Usenet下载工具支持Web界面管理
<kbd>26.0</kbd> • [官网链接](https://nzbget.net/)
<kbd>26.1</kbd> • [官网链接](https://nzbget.net/)
</td>
<td width="33%" align="center">
@@ -794,7 +902,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
📊 开源轻量易用的服务器监控运维工具
<kbd>2.0.6</kbd> • [官网链接](https://github.com/naiba/nezha/)
<kbd>2.0.7</kbd> • [官网链接](https://github.com/naiba/nezha/)
</td>
<td width="33%" align="center">
@@ -876,7 +984,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
📊 开源 all-in-one 数据洞察中心,集成网站分析、服务监控、服务器状态监控
<kbd>1.31.17</kbd> • [官网链接](https://tianji.msgbyte.com/)
<kbd>1.31.20</kbd> • [官网链接](https://tianji.msgbyte.com/)
</td>
<td width="33%" align="center">
@@ -888,7 +996,7 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
⚡ 轻量级服务器监控代理,支持实时性能数据收集
<kbd>0.18.4</kbd> • [官网链接](https://github.com/henrygd/beszel)
<kbd>0.18.7</kbd> • [官网链接](https://github.com/henrygd/beszel)
</td>
<td width="33%" align="center">
@@ -1197,33 +1305,6 @@ AI驱动的开源代码知识库与文档协作平台支持多模型、多数
</tr>
</table>
### 🚀 Usage
#### 📋 Add the script as a 1Panel scheduled task
1. In the 1Panel control panel, open the "Scheduled Tasks" page.
2. Click "Create Task" and choose "Shell Script" as the task type.
3. Paste the following code into the script box:
```bash
#!/bin/bash
# Remove any stale temporary directory
rm -rf /tmp/appstore_merge
# Clone appstore-arch3rPro
git clone --depth=1 https://ghfast.top/https://github.com/arch3rPro/1Panel-Appstore /tmp/appstore_merge/appstore-arch3rPro
# Copy the data (full copy)
cp -rf /tmp/appstore_merge/appstore-arch3rPro/apps/* /opt/1panel/resource/apps/local/
# Clean up the temporary directory
rm -rf /tmp/appstore_merge
echo "App store data updated"
```
<!-- Orange style -->
![Copyright-arch3rPro](https://img.shields.io/badge/Copyright-arch3rPro-ff9800?style=flat&logo=github&logoColor=white)


@@ -0,0 +1,28 @@
additionalProperties:
formFields:
- default: 8090
edit: true
envKey: PANEL_APP_PORT_HTTP
labelEn: Web Port
labelZh: Web端口
required: true
rule: paramPort
type: number
label:
en: Web Port
zh: Web端口
ja: Webポート
ko: Web 포트
- default: ""
edit: true
envKey: AXONHUB_DB_PASSWORD
labelEn: Database Password
labelZh: 数据库密码
required: false
rule: paramComplexity
type: password
label:
en: Database Password
zh: 数据库密码
ja: データベースパスワード
ko: 데이터베이스 비밀번호


@@ -0,0 +1,20 @@
services:
axonhub:
image: looplj/axonhub:v0.9.32
container_name: ${CONTAINER_NAME}
restart: always
networks:
- 1panel-network
ports:
- ${PANEL_APP_PORT_HTTP}:8090
volumes:
- ./data:/data
environment:
- TZ=Asia/Shanghai
- AXONHUB_DB_DIALECT=sqlite3
- AXONHUB_DB_DSN=file:/data/axonhub.db?cache=shared&_fk=1&pragma=journal_mode(WAL)
labels:
createdBy: Apps
networks:
1panel-network:
external: true

apps/axonhub/README.md (new file, 79 lines)

@@ -0,0 +1,79 @@
# AxonHub
One-stop AI development platform - a unified API gateway supporting multiple LLM providers.
## Features
- 🔄 **Any SDK, any model** - call Claude with the OpenAI SDK, or GPT with the Anthropic SDK, with zero code changes
- 🔍 **Full request tracing** - thread-aware observability with complete request timelines for fast debugging
- 🔐 **Enterprise-grade RBAC** - fine-grained access control, usage quotas, and data isolation
- ⚡ **Smart load balancing** - automatic failover in under 100 ms, always routing to the healthiest channel
- 💰 **Real-time cost tracking** - per-request cost breakdown covering input, output, and cache tokens
## Supported LLM Providers
- OpenAI (GPT-4, GPT-4o, GPT-5, etc.)
- Anthropic (Claude 3.5, Claude 3.0, etc.)
- Zhipu AI (GLM-4.5, GLM-4.5-air, etc.)
- Moonshot AI/Kimi (kimi-k2, etc.)
- DeepSeek (DeepSeek-V3.1, etc.)
- ByteDance Doubao (doubao-1.6, etc.)
- Gemini (Gemini 2.5, etc.)
- Fireworks (MiniMax-M2.5, GLM-5, Kimi K2.5, etc.)
- Jina AI (Embeddings, Reranker, etc.)
- OpenRouter (various models)
- ZAI (image generation)
- AWS Bedrock (Claude on AWS)
- Google Cloud (Claude on GCP)
- NanoGPT (various models, image generation)
## Usage
### First Access
1. Open `http://<server-ip>:8090`
2. Follow the setup wizard to create an administrator account (password of at least 6 characters)
3. After logging in, configure your AI providers' API keys
4. Create an API key and start using the gateway
### Default Port
- Web UI: 8090
### Database Configuration
A SQLite database is used by default, with data stored in the `./data` directory.
To use PostgreSQL or MySQL instead, configure the environment variables as described in the official docs:
- `AXONHUB_DB_DIALECT`: database type (postgres/mysql/sqlite3)
- `AXONHUB_DB_DSN`: database connection string
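As a sketch only, switching the bundled compose file to PostgreSQL might look like the fragment below; the `postgres` service name and the DSN format are illustrative assumptions, not values confirmed by the AxonHub docs:

```yaml
services:
  axonhub:
    image: looplj/axonhub:v0.9.32
    environment:
      - TZ=Asia/Shanghai
      - AXONHUB_DB_DIALECT=postgres
      # Hypothetical DSN; verify the exact format in the official documentation
      - AXONHUB_DB_DSN=postgres://axonhub:${AXONHUB_DB_PASSWORD}@postgres:5432/axonhub?sslmode=disable
    depends_on:
      - postgres
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=axonhub
      - POSTGRES_PASSWORD=${AXONHUB_DB_PASSWORD}
      - POSTGRES_DB=axonhub
    volumes:
      - ./postgres:/var/lib/postgresql/data
```

Here `AXONHUB_DB_PASSWORD` corresponds to the Database Password form field defined in the version-level `data.yml`.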
### Data Directory
Application data is stored in the `./data` directory.
## Quick Start
### Calling Claude with the OpenAI SDK
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8090/v1",  # point at AxonHub
    api_key="your-axonhub-api-key"        # an AxonHub API key
)

# Call Claude through the OpenAI SDK
response = client.chat.completions.create(
    model="claude-3-5-sonnet",  # or gpt-4, gemini-pro, deepseek-chat...
    messages=[{"role": "user", "content": "Hello!"}]
)
```
## Links
- Website: https://github.com/looplj/axonhub
- GitHub: https://github.com/looplj/axonhub
- Docs: https://github.com/looplj/axonhub#readme
- Demo: https://axonhub.onrender.com (email: demo@example.com, password: 12345678)

apps/axonhub/data.yml (new file, 29 lines)

@@ -0,0 +1,29 @@
name: AxonHub
tags:
- 开发工具
- AI工具
title: 一站式AI开发平台 - 统一API网关支持多种LLM提供商
description: 一站式AI开发平台 - 统一API网关支持多种LLM提供商
additionalProperties:
key: axonhub
name: AxonHub
tags:
- DevTool
- AI
shortDescZh: 一站式AI开发平台 - 统一API网关
shortDescEn: All-in-one AI Development Platform - Unified API Gateway
description:
en: AxonHub is an AI gateway that lets you switch between model providers without changing a single line of code. Use any SDK to call 100+ LLMs. Built-in failover, load balancing, cost control & end-to-end tracing.
zh: AxonHub是一个AI网关让您无需修改任何代码即可在模型提供商之间切换。使用任何SDK调用100+个LLM。内置故障转移、负载均衡、成本控制和端到端追踪。
ja: AxonHubは、コードを1行も変更せずにモデルプロバイダー間を切り替えることができるAIゲートウェイです。任意のSDKを使用して100以上のLLMを呼び出します。フェイルオーバー、負荷分散、コスト制御、エンドツーエンドのトレースを内蔵。
ko: AxonHub는 코드를 한 줄도 변경하지 않고 모델 제공자 간에 전환할 수 있는 AI 게이트웨이입니다. 모든 SDK를 사용하여 100개 이상의 LLM을 호출합니다. 장애 조치, 로드 밸런싱, 비용 제어 및 엔드 투 엔드 추적이 내장되어 있습니다.
type: website
crossVersionUpdate: true
limit: 0
recommend: 0
website: https://github.com/looplj/axonhub
github: https://github.com/looplj/axonhub
document: https://github.com/looplj/axonhub#readme
architectures:
- amd64
- arm64


@@ -0,0 +1,28 @@
additionalProperties:
formFields:
- default: 8090
edit: true
envKey: PANEL_APP_PORT_HTTP
labelEn: Web Port
labelZh: Web端口
required: true
rule: paramPort
type: number
label:
en: Web Port
zh: Web端口
ja: Webポート
ko: Web 포트
- default: ""
edit: true
envKey: AXONHUB_DB_PASSWORD
labelEn: Database Password
labelZh: 数据库密码
required: false
rule: paramComplexity
type: password
label:
en: Database Password
zh: 数据库密码
ja: データベースパスワード
ko: 데이터베이스 비밀번호


@@ -0,0 +1,20 @@
services:
axonhub:
image: looplj/axonhub:latest
container_name: ${CONTAINER_NAME}
restart: always
networks:
- 1panel-network
ports:
- "${PANEL_APP_PORT_HTTP}:8090"
volumes:
- ./data:/data
environment:
- TZ=Asia/Shanghai
- AXONHUB_DB_DIALECT=sqlite3
- AXONHUB_DB_DSN=file:/data/axonhub.db?cache=shared&_fk=1&pragma=journal_mode(WAL)
labels:
createdBy: "Apps"
networks:
1panel-network:
external: true

apps/axonhub/logo.png (new binary file, 67 KiB, not shown)


@@ -1,6 +1,6 @@
services:
beszel-agent:
image: henrygd/beszel-agent:0.18.4
image: henrygd/beszel-agent:0.18.7
container_name: ${CONTAINER_NAME}
restart: always
network_mode: host


@@ -1,6 +1,6 @@
services:
blinko:
image: blinkospace/blinko:1.8.6
image: blinkospace/blinko:1.8.7
container_name: ${CONTAINER_NAME}
restart: always
networks:


@@ -0,0 +1,57 @@
additionalProperties:
formFields:
- default: 8317
edit: true
envKey: PANEL_APP_PORT_HTTP
labelEn: Web UI Port
labelZh: Web界面端口
required: true
rule: paramPort
type: number
- default: 8085
edit: true
envKey: PANEL_APP_PORT_PROXY
labelEn: Proxy Port
labelZh: 代理端口
required: true
rule: paramPort
type: number
- default: 1455
edit: true
envKey: PANEL_APP_PORT_1455
labelEn: Additional Port 1455
labelZh: 额外端口 1455
required: true
rule: paramPort
type: number
- default: 54545
edit: true
envKey: PANEL_APP_PORT_54545
labelEn: Additional Port 54545
labelZh: 额外端口 54545
required: true
rule: paramPort
type: number
- default: 51121
edit: true
envKey: PANEL_APP_PORT_51121
labelEn: Additional Port 51121
labelZh: 额外端口 51121
required: true
rule: paramPort
type: number
- default: 11451
edit: true
envKey: PANEL_APP_PORT_11451
labelEn: Additional Port 11451
labelZh: 额外端口 11451
required: true
rule: paramPort
type: number
- default: Asia/Shanghai
edit: true
envKey: TZ
labelEn: Time Zone
labelZh: 时区
required: true
type: text


@@ -0,0 +1,422 @@
# Server host/interface to bind to. Default is empty ("") to bind all interfaces (IPv4 + IPv6).
# Use "127.0.0.1" or "localhost" to restrict access to local machine only.
host: ''
# Server port
port: 8317
# TLS settings for HTTPS. When enabled, the server listens with the provided certificate and key.
tls:
enable: false
cert: ''
key: ''
# Management API settings
remote-management:
# Whether to allow remote (non-localhost) management access.
# When false, only localhost can access management endpoints (a key is still required).
allow-remote: false
# Management key. If a plaintext value is provided here, it will be hashed on startup.
# All management requests (even from localhost) require this key.
# Leave empty to disable the Management API entirely (404 for all /v0/management routes).
secret-key: ''
# Disable the bundled management control panel asset download and HTTP route when true.
disable-control-panel: false
# GitHub repository for the management control panel. Accepts a repository URL or releases API URL.
panel-github-repository: 'https://github.com/router-for-me/Cli-Proxy-API-Management-Center'
# Authentication directory (supports ~ for home directory)
auth-dir: '~/.cli-proxy-api'
# API keys for authentication
api-keys:
- 'your-api-key-1'
- 'your-api-key-2'
- 'your-api-key-3'
# Enable debug logging
debug: false
# Enable pprof HTTP debug server (host:port). Keep it bound to localhost for safety.
pprof:
enable: false
addr: '127.0.0.1:8316'
# When true, disable high-overhead HTTP middleware features to reduce per-request memory usage under high concurrency.
commercial-mode: false
# Open OAuth URLs in incognito/private browser mode.
# Useful when you want to login with a different account without logging out from your current session.
# Default: false (but Kiro auth defaults to true for multi-account support)
incognito-browser: true
# When true, write application logs to rotating files instead of stdout
logging-to-file: false
# Maximum total size (MB) of log files under the logs directory. When exceeded, the oldest log
# files are deleted until within the limit. Set to 0 to disable.
logs-max-total-size-mb: 0
# Maximum number of error log files retained when request logging is disabled.
# When exceeded, the oldest error log files are deleted. Default is 10. Set to 0 to disable cleanup.
error-logs-max-files: 10
# When false, disable in-memory usage statistics aggregation
usage-statistics-enabled: false
# Proxy URL. Supports socks5/http/https protocols. Example: socks5://user:pass@192.168.1.1:1080/
# Per-entry proxy-url also supports "direct" or "none" to bypass both the global proxy-url and environment proxies explicitly.
proxy-url: ""
# When true, unprefixed model requests only use credentials without a prefix (except when prefix == model name).
force-model-prefix: false
# When true, forward filtered upstream response headers to downstream clients.
# Default is false (disabled).
passthrough-headers: false
# Number of times to retry a request. Retries will occur if the HTTP response code is 403, 408, 500, 502, 503, or 504.
request-retry: 3
# Maximum number of different credentials to try for one failed request.
# Set to 0 to keep legacy behavior (try all available credentials).
max-retry-credentials: 0
# Maximum wait time in seconds for a cooled-down credential before triggering a retry.
max-retry-interval: 30
# Quota exceeded behavior
quota-exceeded:
switch-project: true # Whether to automatically switch to another project when a quota is exceeded
switch-preview-model: true # Whether to automatically switch to a preview model when a quota is exceeded
# Routing strategy for selecting credentials when multiple match.
routing:
strategy: 'round-robin' # round-robin (default), fill-first
# When true, enable authentication for the WebSocket API (/v1/ws).
ws-auth: false
# When > 0, emit blank lines every N seconds for non-streaming responses to prevent idle timeouts.
nonstream-keepalive-interval: 0
# Streaming behavior (SSE keep-alives + safe bootstrap retries).
# streaming:
# keepalive-seconds: 15 # Default: 0 (disabled). <= 0 disables keep-alives.
# bootstrap-retries: 1 # Default: 0 (disabled). Retries before first byte is sent.
# Gemini API keys
# gemini-api-key:
# - api-key: "AIzaSy...01"
# prefix: "test" # optional: require calls like "test/gemini-3-pro-preview" to target this credential
# base-url: "https://generativelanguage.googleapis.com"
# headers:
# X-Custom-Header: "custom-value"
# proxy-url: "socks5://proxy.example.com:1080"
# # proxy-url: "direct" # optional: explicit direct connect for this credential
# models:
# - name: "gemini-2.5-flash" # upstream model name
# alias: "gemini-flash" # client alias mapped to the upstream model
# excluded-models:
# - "gemini-2.5-pro" # exclude specific models from this provider (exact match)
# - "gemini-2.5-*" # wildcard matching prefix (e.g. gemini-2.5-flash, gemini-2.5-pro)
# - "*-preview" # wildcard matching suffix (e.g. gemini-3-pro-preview)
# - "*flash*" # wildcard matching substring (e.g. gemini-2.5-flash-lite)
# - api-key: "AIzaSy...02"
# Codex API keys
# codex-api-key:
# - api-key: "sk-atSM..."
# prefix: "test" # optional: require calls like "test/gpt-5-codex" to target this credential
# base-url: "https://www.example.com" # use the custom codex API endpoint
# headers:
# X-Custom-Header: "custom-value"
# proxy-url: "socks5://proxy.example.com:1080" # optional: per-key proxy override
# # proxy-url: "direct" # optional: explicit direct connect for this credential
# models:
# - name: "gpt-5-codex" # upstream model name
# alias: "codex-latest" # client alias mapped to the upstream model
# excluded-models:
# - "gpt-5.1" # exclude specific models (exact match)
# - "gpt-5-*" # wildcard matching prefix (e.g. gpt-5-medium, gpt-5-codex)
# - "*-mini" # wildcard matching suffix (e.g. gpt-5-codex-mini)
# - "*codex*" # wildcard matching substring (e.g. gpt-5-codex-low)
# Claude API keys
# claude-api-key:
# - api-key: "sk-atSM..." # use the official claude API key, no need to set the base url
# - api-key: "sk-atSM..."
# prefix: "test" # optional: require calls like "test/claude-sonnet-latest" to target this credential
# base-url: "https://www.example.com" # use the custom claude API endpoint
# headers:
# X-Custom-Header: "custom-value"
# proxy-url: "socks5://proxy.example.com:1080" # optional: per-key proxy override
# # proxy-url: "direct" # optional: explicit direct connect for this credential
# models:
# - name: "claude-3-5-sonnet-20241022" # upstream model name
# alias: "claude-sonnet-latest" # client alias mapped to the upstream model
# excluded-models:
# - "claude-opus-4-5-20251101" # exclude specific models (exact match)
# - "claude-3-*" # wildcard matching prefix (e.g. claude-3-7-sonnet-20250219)
# - "*-thinking" # wildcard matching suffix (e.g. claude-opus-4-5-thinking)
# - "*haiku*" # wildcard matching substring (e.g. claude-3-5-haiku-20241022)
# cloak: # optional: request cloaking for non-Claude-Code clients
# mode: "auto" # "auto" (default): cloak only when client is not Claude Code
# # "always": always apply cloaking
# # "never": never apply cloaking
# strict-mode: false # false (default): prepend Claude Code prompt to user system messages
# # true: strip all user system messages, keep only Claude Code prompt
# sensitive-words: # optional: words to obfuscate with zero-width characters
# - "API"
# - "proxy"
# cache-user-id: true # optional: default is false; set true to reuse cached user_id per API key instead of generating a random one each request
# Default headers for Claude API requests. Update when Claude Code releases new versions.
# In legacy mode, user-agent/package-version/runtime-version/timeout are used as fallbacks
# when the client omits them, while OS/arch remain runtime-derived. When
# stabilize-device-profile is enabled, OS/arch stay pinned to the baseline values below,
# while user-agent/package-version/runtime-version seed a software fingerprint that can
# still upgrade to newer official Claude client versions.
# claude-header-defaults:
# user-agent: "claude-cli/2.1.44 (external, sdk-cli)"
# package-version: "0.74.0"
# runtime-version: "v24.3.0"
# os: "MacOS"
# arch: "arm64"
# timeout: "600"
# stabilize-device-profile: false # optional, default false; set true to enable per-auth/API-key fingerprint pinning
# Default headers for Codex OAuth model requests.
# These are used only for file-backed/OAuth Codex requests when the client
# does not send the header. `user-agent` applies to HTTP and websocket requests;
# `beta-features` only applies to websocket requests. They do not apply to codex-api-key entries.
# codex-header-defaults:
# user-agent: "codex_cli_rs/0.114.0 (Mac OS 14.2.0; x86_64) vscode/1.111.0"
# beta-features: "multi_agent"
# Kiro (AWS CodeWhisperer) configuration
# Note: Kiro API currently only operates in us-east-1 region
#kiro:
# - token-file: "~/.aws/sso/cache/kiro-auth-token.json" # path to Kiro token file
# agent-task-type: "" # optional: "vibe" or empty (API default)
# start-url: "https://your-company.awsapps.com/start" # optional: IDC start URL (preset for login)
# region: "us-east-1" # optional: OIDC region for IDC login and token refresh
# - access-token: "aoaAAAAA..." # or provide tokens directly
# refresh-token: "aorAAAAA..."
# profile-arn: "arn:aws:codewhisperer:us-east-1:..."
# proxy-url: "socks5://proxy.example.com:1080" # optional: proxy override
# Kilocode (OAuth-based code assistant)
# Note: Kilocode uses OAuth device flow authentication.
# Use the CLI command: ./server --kilo-login
# This will save credentials to the auth directory (default: ~/.cli-proxy-api/)
# oauth-model-alias:
# kilo:
# - name: "minimax/minimax-m2.5:free"
# alias: "minimax-m2.5"
# - name: "z-ai/glm-5:free"
# alias: "glm-5"
# oauth-excluded-models:
# kilo:
# - "kilo-claude-opus-4-6" # exclude specific models (exact match)
# - "*:free" # wildcard matching suffix (e.g. all free models)
# OpenAI compatibility providers
# openai-compatibility:
# - name: "openrouter" # The name of the provider; it will be used in the user agent and other places.
# prefix: "test" # optional: require calls like "test/kimi-k2" to target this provider's credentials
# base-url: "https://openrouter.ai/api/v1" # The base URL of the provider.
# headers:
# X-Custom-Header: "custom-value"
# api-key-entries:
# - api-key: "sk-or-v1-...b780"
# proxy-url: "socks5://proxy.example.com:1080" # optional: per-key proxy override
# # proxy-url: "direct" # optional: explicit direct connect for this credential
# - api-key: "sk-or-v1-...b781" # without proxy-url
# models: # The models supported by the provider.
# - name: "moonshotai/kimi-k2:free" # The actual model name.
# alias: "kimi-k2" # The alias used in the API.
# thinking: # optional: omit to default to levels ["low","medium","high"]
# levels: ["low", "medium", "high"]
# # You may repeat the same alias to build an internal model pool.
# # The client still sees only one alias in the model list.
# # Requests to that alias will round-robin across the upstream names below,
# # and if the chosen upstream fails before producing output, the request will
# # continue with the next upstream model in the same alias pool.
# - name: "qwen3.5-plus"
# alias: "claude-opus-4.66"
# - name: "glm-5"
# alias: "claude-opus-4.66"
# - name: "kimi-k2.5"
# alias: "claude-opus-4.66"
# Vertex API keys (Vertex-compatible endpoints, base-url is optional)
# vertex-api-key:
# - api-key: "vk-123..." # x-goog-api-key header
# prefix: "test" # optional: require calls like "test/vertex-pro" to target this credential
# base-url: "https://example.com/api" # optional, e.g. https://zenmux.ai/api; falls back to Google Vertex when omitted
# proxy-url: "socks5://proxy.example.com:1080" # optional per-key proxy override
# # proxy-url: "direct" # optional: explicit direct connect for this credential
# headers:
# X-Custom-Header: "custom-value"
# models: # optional: map aliases to upstream model names
# - name: "gemini-2.5-flash" # upstream model name
# alias: "vertex-flash" # client-visible alias
# - name: "gemini-2.5-pro"
# alias: "vertex-pro"
# excluded-models: # optional: models to exclude from listing
# - "imagen-3.0-generate-002"
# - "imagen-*"
# Amp Integration
# ampcode:
# # Configure upstream URL for Amp CLI OAuth and management features
# upstream-url: "https://ampcode.com"
# # Optional: Override API key for Amp upstream (otherwise uses env or file)
# upstream-api-key: ""
# # Per-client upstream API key mapping
# # Maps client API keys (from top-level api-keys) to different Amp upstream API keys.
# # Useful when different clients need to use different Amp accounts/quotas.
# # If a client key isn't mapped, falls back to upstream-api-key (default behavior).
# upstream-api-keys:
# - upstream-api-key: "amp_key_for_team_a" # Upstream key to use for these clients
# api-keys: # Client keys that use this upstream key
# - "your-api-key-1"
# - "your-api-key-2"
# - upstream-api-key: "amp_key_for_team_b"
# api-keys:
# - "your-api-key-3"
# # Restrict Amp management routes (/api/auth, /api/user, etc.) to localhost only (default: false)
# restrict-management-to-localhost: false
# # Force model mappings to run before checking local API keys (default: false)
# force-model-mappings: false
# # Amp Model Mappings
# # Route unavailable Amp models to alternative models available in your local proxy.
# # Useful when Amp CLI requests models you don't have access to (e.g., Claude Opus 4.5)
# # but you have a similar model available (e.g., Claude Sonnet 4).
# model-mappings:
# - from: "claude-opus-4-5-20251101" # Model requested by Amp CLI
# to: "gemini-claude-opus-4-5-thinking" # Route to this available model instead
# - from: "claude-sonnet-4-5-20250929"
# to: "gemini-claude-sonnet-4-5-thinking"
# - from: "claude-haiku-4-5-20251001"
# to: "gemini-2.5-flash"
# Global OAuth model name aliases (per channel)
# These aliases rename model IDs for both model listing and request routing.
# Supported channels: gemini-cli, vertex, aistudio, antigravity, claude, codex, qwen, iflow, kiro, github-copilot, kimi.
# NOTE: Aliases do not apply to gemini-api-key, codex-api-key, claude-api-key, openai-compatibility, vertex-api-key, or ampcode.
# You can repeat the same name with different aliases to expose multiple client model names.
# oauth-model-alias:
# antigravity:
# - name: "rev19-uic3-1p"
# alias: "gemini-2.5-computer-use-preview-10-2025"
# - name: "gemini-3-pro-image"
# alias: "gemini-3-pro-image-preview"
# - name: "gemini-3-pro-high"
# alias: "gemini-3-pro-preview"
# - name: "gemini-3-flash"
# alias: "gemini-3-flash-preview"
# - name: "claude-sonnet-4-5"
# alias: "gemini-claude-sonnet-4-5"
# - name: "claude-sonnet-4-5-thinking"
# alias: "gemini-claude-sonnet-4-5-thinking"
# - name: "claude-opus-4-5-thinking"
# alias: "gemini-claude-opus-4-5-thinking"
# gemini-cli:
# - name: "gemini-2.5-pro" # original model name under this channel
# alias: "g2.5p" # client-visible alias
# fork: true # when true, keep original and also add the alias as an extra model (default: false)
# vertex:
# - name: "gemini-2.5-pro"
# alias: "g2.5p"
# aistudio:
# - name: "gemini-2.5-pro"
# alias: "g2.5p"
# claude:
# - name: "claude-sonnet-4-5-20250929"
# alias: "cs4.5"
# codex:
# - name: "gpt-5"
# alias: "g5"
# qwen:
# - name: "qwen3-coder-plus"
# alias: "qwen-plus"
# iflow:
# - name: "glm-4.7"
# alias: "glm-god"
# kimi:
# - name: "kimi-k2.5"
# alias: "k2.5"
# kiro:
# - name: "kiro-claude-opus-4-5"
# alias: "op45"
# github-copilot:
# - name: "gpt-5"
# alias: "copilot-gpt5"
# OAuth provider excluded models
# Supported channels: gemini-cli, vertex, aistudio, antigravity, claude, codex, qwen, iflow, kiro, github-copilot.
# oauth-excluded-models:
# gemini-cli:
# - "gemini-2.5-pro" # exclude specific models (exact match)
# - "gemini-2.5-*" # wildcard matching prefix (e.g. gemini-2.5-flash, gemini-2.5-pro)
# - "*-preview" # wildcard matching suffix (e.g. gemini-3-pro-preview)
# - "*flash*" # wildcard matching substring (e.g. gemini-2.5-flash-lite)
# vertex:
# - "gemini-3-pro-preview"
# aistudio:
# - "gemini-3-pro-preview"
# antigravity:
# - "gemini-3-pro-preview"
# claude:
# - "claude-3-5-haiku-20241022"
# codex:
# - "gpt-5-codex-mini"
# qwen:
# - "vision-model"
# iflow:
# - "tstars2.0"
# kimi:
# - "kimi-k2-thinking"
# kiro:
# - "kiro-claude-haiku-4-5"
# github-copilot:
# - "raptor-mini"
# Optional payload configuration
# payload:
# default: # Default rules only set parameters when they are missing in the payload.
# - models:
# - name: "gemini-2.5-pro" # Supports wildcards (e.g., "gemini-*")
# protocol: "gemini" # restricts the rule to a specific protocol, options: openai, gemini, claude, codex, antigravity
# params: # JSON path (gjson/sjson syntax) -> value
# "generationConfig.thinkingConfig.thinkingBudget": 32768
# default-raw: # Default raw rules set parameters using raw JSON when missing (must be valid JSON).
# - models:
# - name: "gemini-2.5-pro" # Supports wildcards (e.g., "gemini-*")
# protocol: "gemini" # restricts the rule to a specific protocol, options: openai, gemini, claude, codex, antigravity
# params: # JSON path (gjson/sjson syntax) -> raw JSON value (strings are used as-is, must be valid JSON)
# "generationConfig.responseJsonSchema": "{\"type\":\"object\",\"properties\":{\"answer\":{\"type\":\"string\"}}}"
# override: # Override rules always set parameters, overwriting any existing values.
# - models:
# - name: "gpt-*" # Supports wildcards (e.g., "gpt-*")
# protocol: "codex" # restricts the rule to a specific protocol, options: openai, gemini, claude, codex, antigravity
# params: # JSON path (gjson/sjson syntax) -> value
# "reasoning.effort": "high"
# override-raw: # Override raw rules always set parameters using raw JSON (must be valid JSON).
# - models:
# - name: "gpt-*" # Supports wildcards (e.g., "gpt-*")
# protocol: "codex" # restricts the rule to a specific protocol, options: openai, gemini, claude, codex, antigravity
# params: # JSON path (gjson/sjson syntax) -> raw JSON value (strings are used as-is, must be valid JSON)
# "response_format": "{\"type\":\"json_schema\",\"json_schema\":{\"name\":\"answer\",\"schema\":{\"type\":\"object\"}}}"
# filter: # Filter rules remove specified parameters from the payload.
# - models:
# - name: "gemini-2.5-pro" # Supports wildcards (e.g., "gemini-*")
# protocol: "gemini" # restricts the rule to a specific protocol, options: openai, gemini, claude, codex, antigravity
# params: # JSON paths (gjson/sjson syntax) to remove from the payload
# - "generationConfig.thinkingConfig.thinkingBudget"
# - "generationConfig.responseJsonSchema"

View File

@@ -0,0 +1,25 @@
services:
  cliproxyapi-plus:
    image: eceasy/cli-proxy-api-plus:v6.9.23-0
    container_name: ${CONTAINER_NAME}
    restart: always
    networks:
      - 1panel-network
    ports:
      - ${PANEL_APP_PORT_HTTP}:8317
      - ${PANEL_APP_PORT_PROXY}:8085
      - ${PANEL_APP_PORT_1455}:1455
      - ${PANEL_APP_PORT_54545}:54545
      - ${PANEL_APP_PORT_51121}:51121
      - ${PANEL_APP_PORT_11451}:11451
    volumes:
      - ./data/config.yaml:/CLIProxyAPI/config.yaml
      - ./data/auths:/root/.cli-proxy-api
      - ./data/logs:/CLIProxyAPI/logs
    environment:
      - TZ=${TZ}
    labels:
      createdBy: Apps
networks:
  1panel-network:
    external: true

View File

@@ -0,0 +1,105 @@
# CLIProxyAPI Plus
CLIProxyAPI Plus 是 CLIProxyAPI 的增强版本,在主线项目基础上添加了第三方提供商支持。所有第三方提供商支持由社区贡献者维护。
## 功能特点
- 支持多种 AI 模型提供商(Claude、Gemini、Codex、Qwen 等)
- 支持第三方提供商扩展
- OAuth 认证支持
- 高性能代理设计
- Web 管理界面
- 灵活的路由和负载均衡策略
## 使用说明
### 默认端口
- **Web UI 端口 (8317)**: 主要的 Web 管理界面和 API 端口
  - Web 管理界面: `http://localhost:8317/management.html`
  - API 端点: `http://localhost:8317/v1`
- **代理端口 (8085)**: 代理服务端口
- **额外端口**: 1455, 54545, 51121, 11451(用于特定功能扩展)
### Web 管理界面
部署后,访问 Web 管理界面需要以下步骤:
1. **编辑配置文件** `./data/config.yaml`
   - 将 `remote-management.allow-remote` 设置为 `true` 以允许远程访问
   - 设置 `remote-management.secret-key` 为您的管理密钥
2. **访问地址**(替换为您的服务器 IP):
```
http://your-server-ip:8317/management.html
```
**注意**:默认 `allow-remote` 为 `false`,仅允许本地访问。如需从其他机器访问,请务必设置为 `true` 并配置强密码。
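上述第 1 步对应的最小配置片段(密钥值为示例占位符):

```yaml
# ./data/config.yaml 片段:开启远程管理访问
remote-management:
  allow-remote: true        # 允许非本机访问管理接口
  secret-key: 'change-me'   # 占位符:请替换为强密钥(首次启动后会被哈希)
```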
### 配置文件
应用数据存储在 `./data` 目录,包含:
- `config.yaml` - 主配置文件
  - API 密钥配置
  - 提供商设置
  - 路由策略
  - 代理设置
- `auths/` - OAuth 认证信息存储目录
- `logs/` - 应用日志目录
### 快速配置
1. 编辑 `./data/config.yaml` 文件
2. 在 `api-keys` 部分添加您的 API 密钥
3. 如需远程访问,设置 `remote-management.allow-remote: true` 和 `remote-management.secret-key`
4. 根据需要配置各个提供商(Claude、Gemini、Codex 等)
5. 重启应用使配置生效
### 主要配置项
```yaml
# API 密钥
api-keys:
  - 'your-api-key-1'
  - 'your-api-key-2'

# 管理界面设置
remote-management:
  allow-remote: false # 是否允许远程管理(true=允许,false=仅本地)
  secret-key: '' # 管理密钥(首次启动后会被哈希)
  disable-control-panel: false

# 代理设置
proxy-url: "" # 全局代理 URL

# 路由策略
routing:
  strategy: 'round-robin' # round-robin 或 fill-first
```
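第 4 步中各提供商的配置模式一致;以 Gemini API 密钥为例的最简片段(密钥为占位符,完整选项见 `config.yaml` 中的注释模板):

```yaml
gemini-api-key:
  - api-key: "AIzaSy..."    # 占位符:替换为您的 Gemini API 密钥
    proxy-url: ""           # 可选:针对该密钥的代理覆盖
```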
## 版本说明
- **latest**: 最新开发版本
- **6.9.23-0**: 最新稳定版本(推荐)
## 相关链接
- 官方文档: https://help.router-for.me/cn/introduction/quick-start.html
- Web UI 文档: https://help.router-for.me/cn/management/webui.html
- GitHub: https://github.com/router-for-me/CLIProxyAPIPlus
- 问题反馈: https://github.com/router-for-me/CLIProxyAPIPlus/issues
## 注意事项
1. 首次部署后请及时修改 `api-keys` 和管理密钥
2. 如需远程访问,请设置 `allow-remote: true` 并配置强密码
3. 生产环境建议在使用完毕后将 `allow-remote` 改回 `false` 以提高安全性
4. 如需使用 OAuth 认证,请确保 `auths/` 目录有正确的读写权限
5. 生产环境建议配置 TLS 加密
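第 5 条中的 TLS 加密对应 `config.yaml` 顶部的 `tls` 配置块(证书路径为示例占位符):

```yaml
tls:
  enable: true
  cert: '/path/to/fullchain.pem'   # 占位符:证书文件路径
  key: '/path/to/privkey.pem'      # 占位符:私钥文件路径
```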
## 技术支持
- 主线项目问题: 请在主线仓库提交 Issue
- Plus 版本第三方提供商问题: 请联系相应的社区维护者

View File

@@ -0,0 +1,105 @@
# CLIProxyAPI Plus
CLIProxyAPI Plus is an enhanced version of CLIProxyAPI, adding support for third-party providers on top of the mainline project. All third-party provider support is maintained by community contributors.
## Features
- Support for multiple AI model providers (Claude, Gemini, Codex, Qwen, etc.)
- Third-party provider extensions
- OAuth authentication support
- High-performance proxy design
- Web management interface
- Flexible routing and load balancing strategies
## Usage
### Default Ports
- **Web UI Port (8317)**: Primary Web management interface and API port
  - Web Management UI: `http://localhost:8317/management.html`
  - API Endpoint: `http://localhost:8317/v1`
- **Proxy Port (8085)**: Proxy service port
- **Additional Ports**: 1455, 54545, 51121, 11451 (for specific feature extensions)
### Web Management Interface
To access the Web management interface after deployment:
1. **Edit config file** `./data/config.yaml`
   - Set `remote-management.allow-remote` to `true` to allow remote access
   - Set `remote-management.secret-key` to your management secret
2. **Access URL** (replace with your server IP):
```
http://your-server-ip:8317/management.html
```
**Note**: `allow-remote` defaults to `false`, so only local access is allowed. To access the interface from other machines, set it to `true` and configure a strong secret key.
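A minimal sketch of the step 1 settings (the secret value is a placeholder):

```yaml
# ./data/config.yaml fragment: enable remote management access
remote-management:
  allow-remote: true        # allow non-localhost access to the management endpoints
  secret-key: 'change-me'   # placeholder; use a strong secret (hashed after first startup)
```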
### Configuration Files
Application data is stored in the `./data` directory:
- `config.yaml` - Main configuration file
  - API key configuration
  - Provider settings
  - Routing strategy
  - Proxy settings
- `auths/` - OAuth authentication storage directory
- `logs/` - Application logs directory
### Quick Configuration
1. Edit the `./data/config.yaml` file
2. Add your API keys in the `api-keys` section
3. For remote access, set `remote-management.allow-remote: true` and `remote-management.secret-key`
4. Configure providers as needed (Claude, Gemini, Codex, etc.)
5. Restart the application for changes to take effect
### Key Configuration Items
```yaml
# API Keys
api-keys:
  - 'your-api-key-1'
  - 'your-api-key-2'

# Management interface settings
remote-management:
  allow-remote: false # Allow remote management (true = allow, false = local only)
  secret-key: '' # Management key (hashed after first startup)
  disable-control-panel: false

# Proxy settings
proxy-url: "" # Global proxy URL

# Routing strategy
routing:
  strategy: 'round-robin' # round-robin or fill-first
```
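Step 4 (provider configuration) follows the same shape for every provider section; a minimal sketch for a Gemini API key entry (the key is a placeholder; see the commented template in `config.yaml` for the full option set):

```yaml
gemini-api-key:
  - api-key: "AIzaSy..."    # placeholder: your Gemini API key
    proxy-url: ""           # optional: per-key proxy override
```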
## Version Information
- **latest**: Latest development version
- **6.9.23-0**: Latest stable version (recommended)
## Links
- Official Documentation: https://help.router-for.me/cn/introduction/quick-start.html
- Web UI Documentation: https://help.router-for.me/cn/management/webui.html
- GitHub: https://github.com/router-for-me/CLIProxyAPIPlus
- Issue Tracker: https://github.com/router-for-me/CLIProxyAPIPlus/issues
## Important Notes
1. Change the default `api-keys` and the management secret key promptly after first deployment
2. For remote access, set `allow-remote: true` and configure a strong password
3. In production, it's recommended to set `allow-remote` back to `false` after use for better security
4. For OAuth authentication, ensure the `auths/` directory has proper read/write permissions
5. TLS encryption is recommended for production environments
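The TLS recommendation in note 5 maps to the server's top-level `tls` block in `config.yaml` (certificate paths are placeholders):

```yaml
tls:
  enable: true
  cert: '/path/to/fullchain.pem'   # placeholder: certificate file path
  key: '/path/to/privkey.pem'      # placeholder: private key file path
```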
## Support
- Mainline project issues: Please submit issues to the mainline repository
- Plus version third-party provider issues: Please contact the corresponding community maintainer

View File

@@ -0,0 +1,24 @@
name: CLIProxyAPI Plus
tags:
  - 网络工具
  - 代理服务
title: CLIProxyAPI Plus - 代理API服务
description: CLIProxyAPI Plus - 代理API服务
additionalProperties:
  key: cliproxyapi-plus
  name: CLIProxyAPI Plus
  tags:
    - Proxy
    - Network
  shortDescZh: CLIProxyAPI Plus 代理API服务
  shortDescEn: CLIProxyAPI Plus Proxy API Service
  type: website
  crossVersionUpdate: true
  limit: 0
  recommend: 0
  website: https://github.com/router-for-me/CLIProxyAPIPlus
  github: https://github.com/router-for-me/CLIProxyAPIPlus
  document: https://help.router-for.me/cn/introduction/quick-start.html
  architectures:
    - amd64
    - arm64

View File

@@ -0,0 +1,57 @@
additionalProperties:
  formFields:
    - default: 8317
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Web UI Port
      labelZh: Web界面端口
      required: true
      rule: paramPort
      type: number
    - default: 8085
      edit: true
      envKey: PANEL_APP_PORT_PROXY
      labelEn: Proxy Port
      labelZh: 代理端口
      required: true
      rule: paramPort
      type: number
    - default: 1455
      edit: true
      envKey: PANEL_APP_PORT_1455
      labelEn: Additional Port 1455
      labelZh: 额外端口 1455
      required: true
      rule: paramPort
      type: number
    - default: 54545
      edit: true
      envKey: PANEL_APP_PORT_54545
      labelEn: Additional Port 54545
      labelZh: 额外端口 54545
      required: true
      rule: paramPort
      type: number
    - default: 51121
      edit: true
      envKey: PANEL_APP_PORT_51121
      labelEn: Additional Port 51121
      labelZh: 额外端口 51121
      required: true
      rule: paramPort
      type: number
    - default: 11451
      edit: true
      envKey: PANEL_APP_PORT_11451
      labelEn: Additional Port 11451
      labelZh: 额外端口 11451
      required: true
      rule: paramPort
      type: number
    - default: Asia/Shanghai
      edit: true
      envKey: TZ
      labelEn: Time Zone
      labelZh: 时区
      required: true
      type: text

View File

@@ -0,0 +1,422 @@
# Server host/interface to bind to. Default is empty ("") to bind all interfaces (IPv4 + IPv6).
# Use "127.0.0.1" or "localhost" to restrict access to local machine only.
host: ''
# Server port
port: 8317
# TLS settings for HTTPS. When enabled, the server listens with the provided certificate and key.
tls:
  enable: false
  cert: ''
  key: ''
# Management API settings
remote-management:
  # Whether to allow remote (non-localhost) management access.
  # When false, only localhost can access management endpoints (a key is still required).
  allow-remote: false
  # Management key. If a plaintext value is provided here, it will be hashed on startup.
  # All management requests (even from localhost) require this key.
  # Leave empty to disable the Management API entirely (404 for all /v0/management routes).
  secret-key: ''
  # Disable the bundled management control panel asset download and HTTP route when true.
  disable-control-panel: false
  # GitHub repository for the management control panel. Accepts a repository URL or releases API URL.
  panel-github-repository: 'https://github.com/router-for-me/Cli-Proxy-API-Management-Center'
# Authentication directory (supports ~ for home directory)
auth-dir: '~/.cli-proxy-api'
# API keys for authentication
api-keys:
  - 'your-api-key-1'
  - 'your-api-key-2'
  - 'your-api-key-3'
# Enable debug logging
debug: false
# Enable pprof HTTP debug server (host:port). Keep it bound to localhost for safety.
pprof:
  enable: false
  addr: '127.0.0.1:8316'
# When true, disable high-overhead HTTP middleware features to reduce per-request memory usage under high concurrency.
commercial-mode: false
# Open OAuth URLs in incognito/private browser mode.
# Useful when you want to login with a different account without logging out from your current session.
# Default: false (but Kiro auth defaults to true for multi-account support)
incognito-browser: true
# When true, write application logs to rotating files instead of stdout
logging-to-file: false
# Maximum total size (MB) of log files under the logs directory. When exceeded, the oldest log
# files are deleted until within the limit. Set to 0 to disable.
logs-max-total-size-mb: 0
# Maximum number of error log files retained when request logging is disabled.
# When exceeded, the oldest error log files are deleted. Default is 10. Set to 0 to disable cleanup.
error-logs-max-files: 10
# When false, disable in-memory usage statistics aggregation
usage-statistics-enabled: false
# Proxy URL. Supports socks5/http/https protocols. Example: socks5://user:pass@192.168.1.1:1080/
# Per-entry proxy-url also supports "direct" or "none" to bypass both the global proxy-url and environment proxies explicitly.
proxy-url: ""
# When true, unprefixed model requests only use credentials without a prefix (except when prefix == model name).
force-model-prefix: false
# When true, forward filtered upstream response headers to downstream clients.
# Default is false (disabled).
passthrough-headers: false
# Number of times to retry a request. Retries will occur if the HTTP response code is 403, 408, 500, 502, 503, or 504.
request-retry: 3
# Maximum number of different credentials to try for one failed request.
# Set to 0 to keep legacy behavior (try all available credentials).
max-retry-credentials: 0
# Maximum wait time in seconds for a cooled-down credential before triggering a retry.
max-retry-interval: 30
# Quota exceeded behavior
quota-exceeded:
  switch-project: true # Whether to automatically switch to another project when a quota is exceeded
  switch-preview-model: true # Whether to automatically switch to a preview model when a quota is exceeded
# Routing strategy for selecting credentials when multiple match.
routing:
  strategy: 'round-robin' # round-robin (default), fill-first
# When true, enable authentication for the WebSocket API (/v1/ws).
ws-auth: false
# When > 0, emit blank lines every N seconds for non-streaming responses to prevent idle timeouts.
nonstream-keepalive-interval: 0
# Streaming behavior (SSE keep-alives + safe bootstrap retries).
# streaming:
# keepalive-seconds: 15 # Default: 0 (disabled). <= 0 disables keep-alives.
# bootstrap-retries: 1 # Default: 0 (disabled). Retries before first byte is sent.
# Gemini API keys
# gemini-api-key:
# - api-key: "AIzaSy...01"
# prefix: "test" # optional: require calls like "test/gemini-3-pro-preview" to target this credential
# base-url: "https://generativelanguage.googleapis.com"
# headers:
# X-Custom-Header: "custom-value"
# proxy-url: "socks5://proxy.example.com:1080"
# # proxy-url: "direct" # optional: explicit direct connect for this credential
# models:
# - name: "gemini-2.5-flash" # upstream model name
# alias: "gemini-flash" # client alias mapped to the upstream model
# excluded-models:
# - "gemini-2.5-pro" # exclude specific models from this provider (exact match)
# - "gemini-2.5-*" # wildcard matching prefix (e.g. gemini-2.5-flash, gemini-2.5-pro)
# - "*-preview" # wildcard matching suffix (e.g. gemini-3-pro-preview)
# - "*flash*" # wildcard matching substring (e.g. gemini-2.5-flash-lite)
# - api-key: "AIzaSy...02"
# Codex API keys
# codex-api-key:
# - api-key: "sk-atSM..."
# prefix: "test" # optional: require calls like "test/gpt-5-codex" to target this credential
# base-url: "https://www.example.com" # use the custom codex API endpoint
# headers:
# X-Custom-Header: "custom-value"
# proxy-url: "socks5://proxy.example.com:1080" # optional: per-key proxy override
# # proxy-url: "direct" # optional: explicit direct connect for this credential
# models:
# - name: "gpt-5-codex" # upstream model name
# alias: "codex-latest" # client alias mapped to the upstream model
# excluded-models:
# - "gpt-5.1" # exclude specific models (exact match)
# - "gpt-5-*" # wildcard matching prefix (e.g. gpt-5-medium, gpt-5-codex)
# - "*-mini" # wildcard matching suffix (e.g. gpt-5-codex-mini)
# - "*codex*" # wildcard matching substring (e.g. gpt-5-codex-low)
# Claude API keys
# claude-api-key:
# - api-key: "sk-atSM..." # use the official claude API key, no need to set the base url
# - api-key: "sk-atSM..."
# prefix: "test" # optional: require calls like "test/claude-sonnet-latest" to target this credential
# base-url: "https://www.example.com" # use the custom claude API endpoint
# headers:
# X-Custom-Header: "custom-value"
# proxy-url: "socks5://proxy.example.com:1080" # optional: per-key proxy override
# # proxy-url: "direct" # optional: explicit direct connect for this credential
# models:
# - name: "claude-3-5-sonnet-20241022" # upstream model name
# alias: "claude-sonnet-latest" # client alias mapped to the upstream model
# excluded-models:
# - "claude-opus-4-5-20251101" # exclude specific models (exact match)
# - "claude-3-*" # wildcard matching prefix (e.g. claude-3-7-sonnet-20250219)
# - "*-thinking" # wildcard matching suffix (e.g. claude-opus-4-5-thinking)
# - "*haiku*" # wildcard matching substring (e.g. claude-3-5-haiku-20241022)
# cloak: # optional: request cloaking for non-Claude-Code clients
# mode: "auto" # "auto" (default): cloak only when client is not Claude Code
# # "always": always apply cloaking
# # "never": never apply cloaking
# strict-mode: false # false (default): prepend Claude Code prompt to user system messages
# # true: strip all user system messages, keep only Claude Code prompt
# sensitive-words: # optional: words to obfuscate with zero-width characters
# - "API"
# - "proxy"
# cache-user-id: true # optional: default is false; set true to reuse cached user_id per API key instead of generating a random one each request
# Default headers for Claude API requests. Update when Claude Code releases new versions.
# In legacy mode, user-agent/package-version/runtime-version/timeout are used as fallbacks
# when the client omits them, while OS/arch remain runtime-derived. When
# stabilize-device-profile is enabled, OS/arch stay pinned to the baseline values below,
# while user-agent/package-version/runtime-version seed a software fingerprint that can
# still upgrade to newer official Claude client versions.
# claude-header-defaults:
# user-agent: "claude-cli/2.1.44 (external, sdk-cli)"
# package-version: "0.74.0"
# runtime-version: "v24.3.0"
# os: "MacOS"
# arch: "arm64"
# timeout: "600"
# stabilize-device-profile: false # optional, default false; set true to enable per-auth/API-key fingerprint pinning
# Default headers for Codex OAuth model requests.
# These are used only for file-backed/OAuth Codex requests when the client
# does not send the header. `user-agent` applies to HTTP and websocket requests;
# `beta-features` only applies to websocket requests. They do not apply to codex-api-key entries.
# codex-header-defaults:
# user-agent: "codex_cli_rs/0.114.0 (Mac OS 14.2.0; x86_64) vscode/1.111.0"
# beta-features: "multi_agent"
# Kiro (AWS CodeWhisperer) configuration
# Note: Kiro API currently only operates in us-east-1 region
#kiro:
# - token-file: "~/.aws/sso/cache/kiro-auth-token.json" # path to Kiro token file
# agent-task-type: "" # optional: "vibe" or empty (API default)
# start-url: "https://your-company.awsapps.com/start" # optional: IDC start URL (preset for login)
# region: "us-east-1" # optional: OIDC region for IDC login and token refresh
# - access-token: "aoaAAAAA..." # or provide tokens directly
# refresh-token: "aorAAAAA..."
# profile-arn: "arn:aws:codewhisperer:us-east-1:..."
# proxy-url: "socks5://proxy.example.com:1080" # optional: proxy override
# Kilocode (OAuth-based code assistant)
# Note: Kilocode uses OAuth device flow authentication.
# Use the CLI command: ./server --kilo-login
# This will save credentials to the auth directory (default: ~/.cli-proxy-api/)
# oauth-model-alias:
# kilo:
# - name: "minimax/minimax-m2.5:free"
# alias: "minimax-m2.5"
# - name: "z-ai/glm-5:free"
# alias: "glm-5"
# oauth-excluded-models:
# kilo:
# - "kilo-claude-opus-4-6" # exclude specific models (exact match)
# - "*:free" # wildcard matching suffix (e.g. all free models)
# OpenAI compatibility providers
# openai-compatibility:
# - name: "openrouter" # The name of the provider; it will be used in the user agent and other places.
# prefix: "test" # optional: require calls like "test/kimi-k2" to target this provider's credentials
# base-url: "https://openrouter.ai/api/v1" # The base URL of the provider.
# headers:
# X-Custom-Header: "custom-value"
# api-key-entries:
# - api-key: "sk-or-v1-...b780"
# proxy-url: "socks5://proxy.example.com:1080" # optional: per-key proxy override
# # proxy-url: "direct" # optional: explicit direct connect for this credential
# - api-key: "sk-or-v1-...b781" # without proxy-url
# models: # The models supported by the provider.
# - name: "moonshotai/kimi-k2:free" # The actual model name.
# alias: "kimi-k2" # The alias used in the API.
# thinking: # optional: omit to default to levels ["low","medium","high"]
# levels: ["low", "medium", "high"]
# # You may repeat the same alias to build an internal model pool.
# # The client still sees only one alias in the model list.
# # Requests to that alias will round-robin across the upstream names below,
# # and if the chosen upstream fails before producing output, the request will
# # continue with the next upstream model in the same alias pool.
# - name: "qwen3.5-plus"
# alias: "claude-opus-4.66"
# - name: "glm-5"
# alias: "claude-opus-4.66"
# - name: "kimi-k2.5"
# alias: "claude-opus-4.66"
# Vertex API keys (Vertex-compatible endpoints, base-url is optional)
# vertex-api-key:
# - api-key: "vk-123..." # x-goog-api-key header
# prefix: "test" # optional: require calls like "test/vertex-pro" to target this credential
# base-url: "https://example.com/api" # optional, e.g. https://zenmux.ai/api; falls back to Google Vertex when omitted
# proxy-url: "socks5://proxy.example.com:1080" # optional per-key proxy override
# # proxy-url: "direct" # optional: explicit direct connect for this credential
# headers:
# X-Custom-Header: "custom-value"
# models: # optional: map aliases to upstream model names
# - name: "gemini-2.5-flash" # upstream model name
# alias: "vertex-flash" # client-visible alias
# - name: "gemini-2.5-pro"
# alias: "vertex-pro"
# excluded-models: # optional: models to exclude from listing
# - "imagen-3.0-generate-002"
# - "imagen-*"
# Amp Integration
# ampcode:
# # Configure upstream URL for Amp CLI OAuth and management features
# upstream-url: "https://ampcode.com"
# # Optional: Override API key for Amp upstream (otherwise uses env or file)
# upstream-api-key: ""
# # Per-client upstream API key mapping
# # Maps client API keys (from top-level api-keys) to different Amp upstream API keys.
# # Useful when different clients need to use different Amp accounts/quotas.
# # If a client key isn't mapped, falls back to upstream-api-key (default behavior).
# upstream-api-keys:
# - upstream-api-key: "amp_key_for_team_a" # Upstream key to use for these clients
# api-keys: # Client keys that use this upstream key
# - "your-api-key-1"
# - "your-api-key-2"
# - upstream-api-key: "amp_key_for_team_b"
# api-keys:
# - "your-api-key-3"
# # Restrict Amp management routes (/api/auth, /api/user, etc.) to localhost only (default: false)
# restrict-management-to-localhost: false
# # Force model mappings to run before checking local API keys (default: false)
# force-model-mappings: false
# # Amp Model Mappings
# # Route unavailable Amp models to alternative models available in your local proxy.
# # Useful when Amp CLI requests models you don't have access to (e.g., Claude Opus 4.5)
# # but you have a similar model available (e.g., Claude Sonnet 4).
# model-mappings:
# - from: "claude-opus-4-5-20251101" # Model requested by Amp CLI
# to: "gemini-claude-opus-4-5-thinking" # Route to this available model instead
# - from: "claude-sonnet-4-5-20250929"
# to: "gemini-claude-sonnet-4-5-thinking"
# - from: "claude-haiku-4-5-20251001"
# to: "gemini-2.5-flash"
# Global OAuth model name aliases (per channel)
# These aliases rename model IDs for both model listing and request routing.
# Supported channels: gemini-cli, vertex, aistudio, antigravity, claude, codex, qwen, iflow, kiro, github-copilot, kimi.
# NOTE: Aliases do not apply to gemini-api-key, codex-api-key, claude-api-key, openai-compatibility, vertex-api-key, or ampcode.
# You can repeat the same name with different aliases to expose multiple client model names.
# oauth-model-alias:
# antigravity:
# - name: "rev19-uic3-1p"
# alias: "gemini-2.5-computer-use-preview-10-2025"
# - name: "gemini-3-pro-image"
# alias: "gemini-3-pro-image-preview"
# - name: "gemini-3-pro-high"
# alias: "gemini-3-pro-preview"
# - name: "gemini-3-flash"
# alias: "gemini-3-flash-preview"
# - name: "claude-sonnet-4-5"
# alias: "gemini-claude-sonnet-4-5"
# - name: "claude-sonnet-4-5-thinking"
# alias: "gemini-claude-sonnet-4-5-thinking"
# - name: "claude-opus-4-5-thinking"
# alias: "gemini-claude-opus-4-5-thinking"
# gemini-cli:
# - name: "gemini-2.5-pro" # original model name under this channel
# alias: "g2.5p" # client-visible alias
# fork: true # when true, keep original and also add the alias as an extra model (default: false)
# vertex:
# - name: "gemini-2.5-pro"
# alias: "g2.5p"
# aistudio:
# - name: "gemini-2.5-pro"
# alias: "g2.5p"
# claude:
# - name: "claude-sonnet-4-5-20250929"
# alias: "cs4.5"
# codex:
# - name: "gpt-5"
# alias: "g5"
# qwen:
# - name: "qwen3-coder-plus"
# alias: "qwen-plus"
# iflow:
# - name: "glm-4.7"
# alias: "glm-god"
# kimi:
# - name: "kimi-k2.5"
# alias: "k2.5"
# kiro:
# - name: "kiro-claude-opus-4-5"
# alias: "op45"
# github-copilot:
# - name: "gpt-5"
# alias: "copilot-gpt5"
# OAuth provider excluded models
# Supported channels: gemini-cli, vertex, aistudio, antigravity, claude, codex, qwen, iflow, kiro, github-copilot, kimi.
# oauth-excluded-models:
# gemini-cli:
# - "gemini-2.5-pro" # exclude specific models (exact match)
# - "gemini-2.5-*" # wildcard matching prefix (e.g. gemini-2.5-flash, gemini-2.5-pro)
# - "*-preview" # wildcard matching suffix (e.g. gemini-3-pro-preview)
# - "*flash*" # wildcard matching substring (e.g. gemini-2.5-flash-lite)
# vertex:
# - "gemini-3-pro-preview"
# aistudio:
# - "gemini-3-pro-preview"
# antigravity:
# - "gemini-3-pro-preview"
# claude:
# - "claude-3-5-haiku-20241022"
# codex:
# - "gpt-5-codex-mini"
# qwen:
# - "vision-model"
# iflow:
# - "tstars2.0"
# kimi:
# - "kimi-k2-thinking"
# kiro:
# - "kiro-claude-haiku-4-5"
# github-copilot:
# - "raptor-mini"
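# For example, under the gemini-cli patterns above, "gemini-2.5-flash-lite"
# is hidden because it matches both "gemini-2.5-*" and "*flash*", while a
# hypothetical "gemini-3-ultra" matches none of the patterns and stays listed.
# Entries without "*" must match the model name exactly.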
# Optional payload configuration
# payload:
# default: # Default rules only set parameters when they are missing in the payload.
# - models:
# - name: "gemini-2.5-pro" # Supports wildcards (e.g., "gemini-*")
# protocol: "gemini" # restricts the rule to a specific protocol, options: openai, gemini, claude, codex, antigravity
# params: # JSON path (gjson/sjson syntax) -> value
# "generationConfig.thinkingConfig.thinkingBudget": 32768
# default-raw: # Default raw rules set parameters using raw JSON when missing (must be valid JSON).
# - models:
# - name: "gemini-2.5-pro" # Supports wildcards (e.g., "gemini-*")
# protocol: "gemini" # restricts the rule to a specific protocol, options: openai, gemini, claude, codex, antigravity
# params: # JSON path (gjson/sjson syntax) -> raw JSON value (strings are used as-is, must be valid JSON)
# "generationConfig.responseJsonSchema": "{\"type\":\"object\",\"properties\":{\"answer\":{\"type\":\"string\"}}}"
# override: # Override rules always set parameters, overwriting any existing values.
# - models:
# - name: "gpt-*" # Supports wildcards (e.g., "gpt-*")
# protocol: "codex" # restricts the rule to a specific protocol, options: openai, gemini, claude, codex, antigravity
# params: # JSON path (gjson/sjson syntax) -> value
# "reasoning.effort": "high"
# override-raw: # Override raw rules always set parameters using raw JSON (must be valid JSON).
# - models:
# - name: "gpt-*" # Supports wildcards (e.g., "gpt-*")
# protocol: "codex" # restricts the rule to a specific protocol, options: openai, gemini, claude, codex, antigravity
# params: # JSON path (gjson/sjson syntax) -> raw JSON value (strings are used as-is, must be valid JSON)
# "response_format": "{\"type\":\"json_schema\",\"json_schema\":{\"name\":\"answer\",\"schema\":{\"type\":\"object\"}}}"
# filter: # Filter rules remove specified parameters from the payload.
# - models:
# - name: "gemini-2.5-pro" # Supports wildcards (e.g., "gemini-*")
# protocol: "gemini" # restricts the rule to a specific protocol, options: openai, gemini, claude, codex, antigravity
# params: # JSON paths (gjson/sjson syntax) to remove from the payload
# - "generationConfig.thinkingConfig.thinkingBudget"
# - "generationConfig.responseJsonSchema"
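# Worked example of the rule kinds above: given an incoming Gemini request
# that already contains
#   {"generationConfig": {"thinkingConfig": {"thinkingBudget": 1024}}}
# a matching `default` rule for "generationConfig.thinkingConfig.thinkingBudget"
# leaves 1024 untouched (the value is only set when missing), an `override`
# rule replaces it with the configured 32768, and a `filter` rule deletes the
# key entirely before the request is forwarded upstream.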


@@ -0,0 +1,26 @@
services:
  cliproxyapi-plus:
    image: eceasy/cli-proxy-api-plus:latest
    container_name: ${CONTAINER_NAME}
    restart: always
    networks:
      - 1panel-network
    ports:
      - "${PANEL_APP_PORT_HTTP}:8317"
      - "${PANEL_APP_PORT_PROXY}:8085"
      - "${PANEL_APP_PORT_1455}:1455"
      - "${PANEL_APP_PORT_54545}:54545"
      - "${PANEL_APP_PORT_51121}:51121"
      - "${PANEL_APP_PORT_11451}:11451"
    volumes:
      - ./data/config.yaml:/CLIProxyAPI/config.yaml
      - ./data/auths:/root/.cli-proxy-api
      - ./data/logs:/CLIProxyAPI/logs
    environment:
      - TZ=${TZ}
    labels:
      createdBy: "Apps"
networks:
  1panel-network:
    external: true



@@ -0,0 +1,29 @@
additionalProperties:
  formFields:
    - default: 9100
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Web Port
      labelZh: Web端口
      required: true
      rule: paramPort
      type: number
      label:
        en: Web Port
        zh: Web端口
        ja: Webポート
        ko: Web 포트
    - default: "Craft-Agents-"
      edit: true
      envKey: CRAFT_SERVER_TOKEN
      labelEn: Server Token
      labelZh: 服务器令牌
      random: true
      required: true
      rule: paramComplexity
      type: password
      label:
        en: Server Token
        zh: 服务器令牌
        ja: サーバートークン
        ko: 서버 토큰


@@ -0,0 +1,28 @@
services:
  craft-agents:
    image: ghcr.io/lukilabs/craft-agents-server:0.8.7
    container_name: ${CONTAINER_NAME}
    restart: always
    networks:
      - 1panel-network
    ports:
      - "${PANEL_APP_PORT_HTTP}:9100"
    volumes:
      - craft-agents-data:/home/craftagents/.craft-agent
    environment:
      - TZ=Asia/Shanghai
      - CRAFT_SERVER_TOKEN=${CRAFT_SERVER_TOKEN}
      - CRAFT_RPC_HOST=0.0.0.0
    command:
      - bun
      - run
      - packages/server/src/index.ts
      - --allow-insecure-bind
    labels:
      createdBy: "Apps"
volumes:
  craft-agents-data:
    driver: local
networks:
  1panel-network:
    external: true

apps/craft-agents/README.md Normal file

@@ -0,0 +1,226 @@
# Craft Agents
Craft Agents is a powerful AI agent workspace that supports multiple LLM providers and MCP integration.
## Features
- **Multi-provider LLM support**: works with Anthropic, Google AI Studio, ChatGPT Plus, GitHub Copilot, and other AI providers
- **MCP integration**: connects to MCP servers, REST APIs, and the local file system
- **Multi-session management**: inbox/archive views with session labels and status workflows
- **Permission modes**: three-level permission system (Explore, Ask to Edit, Auto) with customizable rules
- **Dynamic status system**: customizable session workflow statuses (To Do, In Progress, Done, etc.)
- **Automations**: event-driven automations triggered by label changes, scheduled tasks, tool usage, and more
- **Headless server mode**: runs as a remote server, with the desktop app connecting as a thin client
- **Web UI**: built-in web interface for browser-based access and management
## Usage
### Default Ports
- Web UI / RPC port: 9100
### Configuration
#### Required Parameters
- **Server token (CRAFT_SERVER_TOKEN)**: Bearer token used for client authentication. A secure token in the form `Craft-Agents-<random complex password>` is generated automatically; you can also set your own.
#### Optional Parameters
- **Web port (PANEL_APP_PORT_HTTP)**: port for accessing the web UI; defaults to `9100`.
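If you prefer to supply your own token, a value in the same shape as the auto-generated one can be produced with openssl. This is a hedged sketch; any sufficiently strong random source works, and the suffix length shown here is arbitrary:

```bash
# Hypothetical example: build a token matching the "Craft-Agents-<random>" pattern.
CRAFT_SERVER_TOKEN="Craft-Agents-$(openssl rand -hex 16)"
echo "$CRAFT_SERVER_TOKEN"
```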
#### Security Notes
⚠️ **Important**: by default this app starts with the `--allow-insecure-bind` flag, which allows the unencrypted `ws://` protocol on internal networks. This applies to the following scenarios:
- **Internal network**: the app runs in a trusted internal environment
- **Reverse proxy**: TLS is handled by a reverse proxy such as Nginx or Caddy
- **Direct public exposure**: exposing the service directly to the public internet is not recommended
**Production recommendations**:
- Use a reverse proxy (e.g. Nginx, Caddy) to handle TLS encryption
- Or configure TLS certificates inside the container (set the `CRAFT_RPC_TLS_CERT` and `CRAFT_RPC_TLS_KEY` environment variables) and remove the `--allow-insecure-bind` flag
**Data storage**: application data is stored in a Docker named volume managed by Docker itself; no manual permission configuration is needed.
### Connecting
#### Via the Web UI
After deployment, open `http://<server IP>:9100` in a browser and log in with the configured server token.
#### Via the Desktop App
In the Craft Agents desktop app, configure a remote workspace:
- URL: `ws://<server IP>:9100` or `wss://<server IP>:9100` (when TLS is enabled)
- Token: the server token set at deployment
### Data Directory
Application data is stored in the Docker named volume `craft-agents-data`, mounted at `/home/craftagents/.craft-agent` inside the container, and includes:
- Configuration files
- Session data
- Workspace settings
- Skill and source configuration
**Data management**:
- The volume is managed by Docker automatically; no manual permission setup is required
- Data persists even if the container is removed
- Run `docker volume inspect craft-agents-data` to locate the data on disk
### Secure Access Options
Following the official documentation, the recommended secure access options are:
#### Option 1: Tailscale (recommended)
Tailscale creates a private mesh network between your devices, with no port forwarding, certificates, or firewall rules required.
**Advantages**:
- ✅ No TLS certificates to configure
- ✅ End-to-end encryption
- ✅ The server is reachable only from your Tailscale network
**Setup**:
```yaml
environment:
  - CRAFT_RPC_HOST=100.x.y.z # Tailscale IP
```
#### Option 2: Reverse Proxy (Nginx, Caddy)
The standard production deployment: the reverse proxy handles TLS termination and access control.
**Caddy example** (automatic HTTPS):
```
craft.example.com {
    reverse_proxy localhost:9100
}
```
**Nginx example**:
```nginx
server {
    listen 443 ssl;
    server_name craft.example.com;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    location / {
        proxy_pass http://localhost:9100;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```
When using a reverse proxy, bind the app to localhost:
```yaml
environment:
  - CRAFT_RPC_HOST=127.0.0.1
```
#### Option 3: Cloudflare Tunnel
Exposes the service over HTTPS with no open ports or certificate management.
**Quick tunnel** (instant HTTPS URL):
```bash
cloudflared tunnel --url http://localhost:9100
```
This prints a `https://<random>.trycloudflare.com` URL.
**Permanent custom domain**:
```bash
# One-time setup
cloudflared tunnel login
cloudflared tunnel create craft-agents
cloudflared tunnel route dns craft-agents agents.yourdomain.com
# Run the tunnel
cloudflared tunnel run --url http://localhost:9100 craft-agents
```
#### Option 4: SSH Tunnel
Quick, temporary access with no setup:
```bash
# On the client, forward local port 9100 to the remote server
ssh -L 9100:localhost:9100 user@your-server
```
Then connect to `ws://localhost:9100` from the desktop app or a browser.
#### Option 5: Direct TLS Certificates
To enable TLS directly in the app:
1. **Generate certificates** (see the official documentation):
```bash
# Generate a development certificate with the official script
./scripts/generate-dev-cert.sh
```
2. **Edit docker-compose.yml**:
```yaml
volumes:
  - craft-agents-data:/home/craftagents/.craft-agent
  - ./certs:/certs:ro # mount the certificate directory
environment:
  - CRAFT_SERVER_TOKEN=${CRAFT_SERVER_TOKEN}
  - CRAFT_RPC_HOST=0.0.0.0
  - CRAFT_RPC_TLS_CERT=/certs/cert.pem
  - CRAFT_RPC_TLS_KEY=/certs/key.pem
```
3. **Remove the `--allow-insecure-bind` flag**
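If the official certificate script is not available, a self-signed certificate can be generated directly with openssl. This is an assumption-laden sketch for testing only; adjust the CN to your actual hostname:

```bash
# Assumption: a plain self-signed certificate is acceptable for internal testing.
mkdir -p certs
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=craft-agents.local" \
  -keyout certs/key.pem -out certs/cert.pem
```

Note that clients will warn about a self-signed certificate; for public deployments prefer a reverse proxy with a CA-issued certificate.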
#### Access URLs
- **With TLS enabled**:
  - Web UI: `https://192.168.123.201:9100`
  - Desktop client: `wss://192.168.123.201:9100`
- **Without TLS (internal testing only)**:
  - Web UI: `http://192.168.123.201:9100`
  - Desktop client: may fail to connect (browser API restrictions)
**Recommended order**:
1. Tailscale (simplest and secure)
2. Reverse proxy (standard production setup)
3. Cloudflare Tunnel (no port forwarding needed)
4. SSH tunnel (temporary access)
5. Direct TLS configuration (not recommended)
## Supported LLM Providers
### Direct Connections
- **Anthropic**: API key or Claude Max/Pro OAuth
- **Google AI Studio**: API key
- **ChatGPT Plus / Pro**: Codex OAuth
- **GitHub Copilot**: OAuth (device code)
### Third-Party Providers
Supported via custom endpoints:
- OpenRouter
- Vercel AI Gateway
- Ollama (local models)
- Other OpenAI-compatible endpoints
## Links
- Website: https://agents.craft.do
- GitHub: https://github.com/lukilabs/craft-agents-oss
- Documentation: https://github.com/lukilabs/craft-agents-oss#readme
## License
Apache License 2.0


@@ -0,0 +1,31 @@
name: Craft Agents
tags:
  - 开发工具
  - AI助手
title: AI Agent工作空间,支持多LLM提供商和MCP集成
description: AI Agent工作空间,支持多LLM提供商和MCP集成
additionalProperties:
  key: craft-agents
  name: Craft Agents
  tags:
    - DevTool
    - AI
  shortDescZh: AI Agent工作空间,支持多LLM提供商和MCP集成
  shortDescEn: AI Agent workspace with multi-LLM provider support and MCP integration
  type: website
  crossVersionUpdate: true
  limit: 0
  recommend: 0
  website: https://agents.craft.do
  github: https://github.com/lukilabs/craft-agents-oss
  document: https://github.com/lukilabs/craft-agents-oss#readme
  architectures:
    - amd64
    - arm64
  description:
    en: Craft Agents is an AI agent workspace that supports multiple LLM providers (Anthropic, Google AI Studio, ChatGPT Plus, GitHub Copilot) and MCP integration. It features multi-session management, dynamic status workflow, and can run as a headless server.
    zh: Craft Agents是一个AI Agent工作空间,支持多种LLM提供商(Anthropic、Google AI Studio、ChatGPT Plus、GitHub Copilot)和MCP集成。它具有多会话管理、动态状态工作流等功能,可以作为无头服务器运行。
    zh-Hant: Craft Agents是一個AI Agent工作空間,支持多種LLM提供商(Anthropic、Google AI Studio、ChatGPT Plus、GitHub Copilot)和MCP集成。它具有多會話管理、動態狀態工作流等功能,可以作為無頭服務器運行。
    ja: Craft Agentsは、複数のLLMプロバイダー(Anthropic、Google AI Studio、ChatGPT Plus、GitHub Copilot)とMCP統合をサポートするAIエージェントワークスペースです。マルチセッション管理、動的ステータスワークフローなどの機能を備え、ヘッドレスサーバーとして実行できます。
    ko: Craft Agents는 여러 LLM 제공자(Anthropic, Google AI Studio, ChatGPT Plus, GitHub Copilot)와 MCP 통합을 지원하는 AI 에이전트 워크스페이스입니다. 다중 세션 관리, 동적 상태 워크플로 등의 기능을 갖추고 있으며 헤드리스 서버로 실행할 수 있습니다.
  memoryRequired: 512


@@ -0,0 +1,29 @@
additionalProperties:
  formFields:
    - default: 9100
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Web Port
      labelZh: Web端口
      required: true
      rule: paramPort
      type: number
      label:
        en: Web Port
        zh: Web端口
        ja: Webポート
        ko: Web 포트
    - default: "Craft-Agents-"
      edit: true
      envKey: CRAFT_SERVER_TOKEN
      labelEn: Server Token
      labelZh: 服务器令牌
      random: true
      required: true
      rule: paramComplexity
      type: password
      label:
        en: Server Token
        zh: 服务器令牌
        ja: サーバートークン
        ko: 서버 토큰


@@ -0,0 +1,28 @@
services:
  craft-agents:
    image: ghcr.io/lukilabs/craft-agents-server:latest
    container_name: ${CONTAINER_NAME}
    restart: always
    networks:
      - 1panel-network
    ports:
      - "${PANEL_APP_PORT_HTTP}:9100"
    volumes:
      - craft-agents-data:/home/craftagents/.craft-agent
    environment:
      - TZ=Asia/Shanghai
      - CRAFT_SERVER_TOKEN=${CRAFT_SERVER_TOKEN}
      - CRAFT_RPC_HOST=0.0.0.0
    command:
      - bun
      - run
      - packages/server/src/index.ts
      - --allow-insecure-bind
    labels:
      createdBy: "Apps"
volumes:
  craft-agents-data:
    driver: local
networks:
  1panel-network:
    external: true

apps/craft-agents/logo.png Normal file (binary file not shown)


@@ -510,7 +510,7 @@ x-shared-env:
QUEUE_MONITOR_INTERVAL: ${QUEUE_MONITOR_INTERVAL:-30}
services:
api:
image: langgenius/dify-api:1.13.2
image: langgenius/dify-api:1.13.3
env_file:
- dify.env
restart: always
@@ -1041,7 +1041,7 @@ services:
- ssrf_proxy_network
- default
worker:
image: langgenius/dify-api:1.13.2
image: langgenius/dify-api:1.13.3
env_file:
- dify.env
restart: always
@@ -1570,7 +1570,7 @@ services:
- ssrf_proxy_network
- default
web:
image: langgenius/dify-web:1.13.2
image: langgenius/dify-web:1.13.3
container_name: ${CONTAINER_NAME}
env_file:
- dify.env


@@ -11,7 +11,7 @@ services:
APP_SECRET: 52f235dee223c92a83a934ada13b83075c9855fe966b3cbf9dd86810e2b742ee
DATABASE_URL: postgresql://docmost:${PANEL_DB_USER_PASSWORD}@db:5432/docmost?schema=public
REDIS_URL: redis://redis:6379
image: docmost/docmost:0.70.3
image: docmost/docmost:0.71.1
labels:
createdBy: Apps
depends_on:


@@ -1,6 +1,6 @@
services:
easytier:
image: easytier/easytier:v2.5.0
image: easytier/easytier:v2.6.0
container_name: ${CONTAINER_NAME}
restart: always
network_mode: host


@@ -1,6 +1,6 @@
services:
flowise:
image: flowiseai/flowise:3.1.0
image: flowiseai/flowise:3.1.2
container_name: ${CONTAINER_NAME}
restart: always
networks:


@@ -1,6 +1,6 @@
services:
gpt-load:
image: ghcr.io/tbphp/gpt-load:v1.4.4
image: ghcr.io/tbphp/gpt-load:v1.4.6
container_name: ${CONTAINER_NAME}
restart: always
ports:


@@ -1,6 +1,6 @@
services:
gpt4free:
image: hlohaus789/g4f:v7.3.4-slim
image: hlohaus789/g4f:v7.4.7-slim
container_name: ${CONTAINER_NAME}
restart: always
networks:


@@ -1,6 +1,6 @@
services:
gpt4free:
image: hlohaus789/g4f:v7.3.4
image: hlohaus789/g4f:v7.4.7
container_name: ${CONTAINER_NAME}
restart: always
networks:


@@ -1,6 +1,6 @@
services:
inspector:
image: ghcr.io/modelcontextprotocol/inspector:0.21.1
image: ghcr.io/modelcontextprotocol/inspector:0.21.2
container_name: ${CONTAINER_NAME}
restart: always
networks:
