yetone
3aaf7dad77
feat: tools support copilot ( #1183 )
2025-02-05 23:47:52 +08:00
yetone
1437f319d2
feat: tools ( #1180 )
...
* feat: tools
* feat: claude use tools
* feat: openai use tools
2025-02-05 22:39:54 +08:00
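The three tools commits above wire function calling through multiple providers. As an illustration only (the plugin itself is Lua, and this neutral tool format is an assumption), converting one provider-neutral tool definition into the OpenAI-style function-calling shape might look like:

```python
def to_openai_tool(tool: dict) -> dict:
    """Convert a provider-neutral tool definition into the OpenAI
    function-calling request shape. The neutral input format is assumed."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # JSON Schema describing the tool's arguments
            "parameters": tool.get(
                "parameters", {"type": "object", "properties": {}}
            ),
        },
    }
```

Claude's tool format differs (top-level `name`/`description`/`input_schema`), which is why each provider needs its own adapter.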
yetone
ef4b6077ec
feat: supports openrouter reasoning ( #1174 )
2025-02-04 01:38:18 +08:00
ken
43269cc07f
Feat: Add Amazon Bedrock provider ( #1167 )
2025-02-03 22:33:25 +08:00
yetone
cd7390de21
fix: remove unnecessary think tag ( #1173 )
2025-02-03 22:32:41 +08:00
yetone
8536d102be
fix: reasoning content processing ( #1171 )
2025-02-03 21:55:12 +08:00
yetone
5ac934f228
chores: remove debug log ( #1160 )
2025-02-02 02:29:41 +08:00
yetone
b5ac768416
feat: supports reasoning_content ( #1159 )
2025-02-02 02:12:14 +08:00
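Reasoning-capable backends (DeepSeek-style APIs) stream the chain of thought in a separate `reasoning_content` delta field alongside the ordinary `content` field. A minimal sketch, assuming that delta shape, of keeping the two streams apart:

```python
def split_stream(deltas):
    """Accumulate streamed deltas, keeping reasoning text separate from
    the final answer. Delta dicts mirror the OpenAI-compatible shape."""
    reasoning, content = [], []
    for d in deltas:
        if d.get("reasoning_content"):
            reasoning.append(d["reasoning_content"])
        if d.get("content"):
            content.append(d["content"])
    return "".join(reasoning), "".join(content)
```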
kernitus
499b7a854b
chore: make azure o series models stream
2025-01-29 13:40:43 +08:00
William Heryanto
369410bdb1
fix: Gemini not reaching end state ( #1027 )
2025-01-05 21:11:04 +08:00
hat0uma
0536c6e552
fix(copilot): Prioritize xdg_config for OAuth token references on Windows ( #1037 )
2025-01-05 20:45:25 +08:00
yetone
3ec847e3cb
fix(ci): lua lint ( #1035 )
2025-01-05 17:11:15 +08:00
Larry Lv
ec5d1abf34
fix(openai): support all o series models ( #1031 )
...
Before this change, `max_completion_tokens` was not set for `o` series models, so completion requests would sometimes time out. This change makes sure the `max_tokens` parameter is converted to `max_completion_tokens` for `o` series models.
I tested this change with `gpt-4o-mini`, `o1-mini`, and `o3-mini`, and they all still work as expected.
2025-01-05 13:23:33 +08:00
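The fix described above amounts to renaming one request parameter for reasoning models. A minimal sketch in Python (the plugin's actual code is Lua; the helper name is hypothetical):

```python
def normalize_openai_params(model: str, params: dict) -> dict:
    """Rename max_tokens to max_completion_tokens for o-series models,
    which reject the legacy parameter name."""
    params = dict(params)  # avoid mutating the caller's table
    is_o_series = model.startswith("o1") or model.startswith("o3")
    if is_o_series and "max_tokens" in params:
        params["max_completion_tokens"] = params.pop("max_tokens")
    return params
```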
Sam Jones
9abbec4c5b
fix(copilot): refreshing copilot tokens ( #935 )
...
* fix: wait for github copilot token to refresh before calling completion
* feat: timer to refresh copilot token to prevent 401
2024-12-29 22:58:13 -08:00
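The two parts of this fix, waiting for a valid token before completing and refreshing proactively so requests never hit a 401, can be sketched as a small token manager. Class and parameter names here are hypothetical, not the plugin's Lua internals:

```python
import time


class CopilotTokenManager:
    """Refresh an OAuth-derived token slightly before expiry rather than
    reacting to a 401 after the fact."""

    def __init__(self, fetch_token, margin_s: float = 60.0):
        self._fetch_token = fetch_token  # callable returning (token, expires_at)
        self._margin_s = margin_s  # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh if missing or inside the pre-expiry margin.
        if self._token is None or time.time() >= self._expires_at - self._margin_s:
            self._token, self._expires_at = self._fetch_token()
        return self._token
```

In the plugin this check would run before each completion request, with a timer covering long idle periods.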
kernitus
0d62ffd1cb
fix: azure o1 unsupported options ( #995 )
2024-12-24 22:40:59 +08:00
yetone
01e05a538b
fix: more reasonable error reporting ( #965 )
2024-12-18 23:16:41 +08:00
msvechla
6206998f24
chore: allow to pass raw curl args ( #920 )
...
This can be used to pass additional arguments to curl, which is helpful when working on new providers such as Bedrock that rely on curl arguments for authorization.
2024-12-04 18:57:07 +08:00
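Appending user-supplied raw arguments to the curl invocation is a small change; a sketch of the idea (function name and argument order are assumptions, not the plugin's code), using curl's real `--aws-sigv4` flag as the kind of authorization argument Bedrock needs:

```python
def build_curl_cmd(url: str, headers: dict, extra_args=None) -> list:
    """Assemble a curl argv, appending caller-supplied raw arguments
    (e.g. ["--aws-sigv4", "aws:amz:us-east-1:bedrock"]) at the end."""
    cmd = ["curl", "-sS", url]
    for k, v in headers.items():
        cmd += ["-H", f"{k}: {v}"]
    cmd += list(extra_args or [])
    return cmd
```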
Arkuna
57311bf8cd
fix: Get copilot refresh token asynchronously ( #918 )
2024-12-01 14:00:42 +08:00
Aaron Batilo
e60ccd2db4
feat: enable streaming for o1 models ( #896 )
...
As of a few days ago, o1 models support streaming responses. Please see:
https://community.openai.com/t/openai-o1-streaming-now-available-api-access-for-tiers-1-5/1025430
2024-11-24 17:28:27 +08:00
Shourya Sharma
9d2599df4d
refactor: ♻️ Updated API parsing logic for vertex AI to throw specific error ( #887 )
...
Co-authored-by: Shourya Sharma <shourya.sharma@complyadvantage.com>
2024-11-23 12:47:29 +08:00
yetone
3beed68157
fix: copilot url join ( #871 )
2024-11-19 06:20:42 +08:00
yetone
e65be50a0a
fix: claude parse response ( #870 )
2024-11-19 06:03:03 +08:00
yetone
cf2312abbc
fix: provider must be set ( #868 )
2024-11-19 05:14:04 +08:00
yetone
9891b03656
fix(openai): user and assistant roles should be alternating ( #859 )
2024-11-17 03:49:02 +08:00
yetone
ff85b9c1e2
refactor: remove redundant local field to facilitate provider configuration ( #858 )
2024-11-17 02:55:40 +08:00
yetone
4acdcb6e8b
fix: provider inherited_from ( #857 )
2024-11-17 01:09:33 +08:00
yetone
dfc51b3247
feat: add url_join ( #856 )
2024-11-17 00:39:03 +08:00
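A `url_join` helper typically exists to avoid doubled or dropped slashes when concatenating a base URL with endpoint paths (the problem behind the copilot url join fix above). A sketch of the usual behavior, not the plugin's actual Lua implementation:

```python
def url_join(base: str, *parts: str) -> str:
    """Join URL segments without doubling or dropping slashes."""
    out = base.rstrip("/")
    for p in parts:
        out += "/" + p.strip("/")
    return out
```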
yetone
a3e5053d55
fix: preset vendors missing many fields ( #851 )
2024-11-16 02:09:14 +08:00
Shourya Sharma
839a8ee25a
feat: ✨ Added vertex AI provider for orgs using gemini ( #840 )
...
Co-authored-by: Shourya Sharma <shourya.sharma@complyadvantage.com>
2024-11-15 00:34:58 +08:00
insects
ecaf850859
fix(compat): filter out non value and not user message ( #818 )
...
Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>
Co-authored-by: Aaron Pham <contact@aarnphm.xyz>
2024-11-07 05:38:56 -05:00
insects
ec9b00db8b
fix(openai): add backward compat for get_user_message ( #813 )
...
Co-authored-by: ming.chen <ming.chen@shopee.com>
Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>
2024-11-07 02:16:19 -05:00
Christopher Brewin
c516883b99
fix(copilot): refresh token before sending the request ( #791 )
...
Co-authored-by: Aaron Pham <contact@aarnphm.xyz>
2024-11-06 00:07:02 -05:00
yetone
1e8abbf798
feat: memory 🧠 ( #793 )
2024-11-04 16:20:28 +08:00
Aaron Pham
5c02a5d846
chore(type): update providers and claude hints ( #766 )
2024-10-27 02:27:10 -04:00
Aaron Pham
bdbbdec88c
feat(tokenizers): support parsing from public URL ( #765 )
2024-10-27 02:17:35 -04:00
Aaron Pham
bd6ce346c1
fix(copilot): cached tokens to avoid rate limits ( closes #557 ) ( #746 )
2024-10-22 04:37:17 -04:00
Aaron Batilo
f92c3a60f3
fix: support legacy finish_reason ( #706 )
...
Many OpenAI-compatible alternative servers still return a `finish_reason` of `eos_token` instead of `stop`. This commit accepts that legacy value as well, to support more of these servers.
2024-10-11 21:46:34 +08:00
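The legacy-value handling above reduces to a small normalization step. A sketch, assuming the mapping table form (only `eos_token` is named by the commit; the helper is hypothetical):

```python
def normalize_finish_reason(reason):
    """Map legacy finish_reason values from OpenAI-compatible servers
    onto the standard ones."""
    legacy = {"eos_token": "stop"}
    return legacy.get(reason, reason)
```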
Sapir Shemer
d74c9d0417
feat: supports openai o1-preview
...
* feat: make O1 models on openai work by handling non-streams & correct parameters
* chore: set temperature automatically when using o1 models
2024-09-27 21:08:10 +08:00
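At launch, o1 models rejected streaming and accepted only the default temperature of 1, which is what the two bullet points above address. A sketch of that parameter adjustment (hypothetical helper; the plugin's version is Lua):

```python
def adjust_for_o1(model: str, params: dict) -> dict:
    """Disable streaming and pin temperature to 1 for o1 models,
    per their launch-time API restrictions."""
    params = dict(params)
    if model.startswith("o1"):
        params["stream"] = False
        params["temperature"] = 1
    return params
```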
Agustín Catellani
bcec0fa194
fix: initialize auto suggestions providers ( closes #571 ) ( #576 )
2024-09-15 10:56:18 -04:00
Aaron Pham
2b89f0d529
perf(anthropic): prompt-caching ( #517 )
...
bring back prompt caching support on Anthropic
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
2024-09-04 03:19:33 -04:00
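Anthropic's prompt caching works by marking a reusable prefix block in the request with `cache_control`. A sketch of tagging the last system block as a cache point, assuming a simple list-of-blocks payload (not the plugin's actual request-building code):

```python
def add_prompt_caching(system_blocks):
    """Mark the final system block as an ephemeral cache point, per
    Anthropic's prompt-caching request format."""
    blocks = [dict(b) for b in system_blocks]  # copy, don't mutate input
    if blocks:
        blocks[-1]["cache_control"] = {"type": "ephemeral"}
    return blocks
```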
Aaron Pham
73730513d1
revert(gemini): revert to gpt-4o as tokenizers ( closes #499 ) ( #506 )
...
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
2024-09-03 22:59:14 -04:00
Aaron Pham
e57a3f27df
chore(provider): use default value set in metaclass ( #503 )
...
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
2024-09-03 21:56:52 -04:00
Aaron Pham
d7d476ddf5
chore(secrets): support table of string ( #500 )
...
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
2024-09-03 21:47:01 -04:00
Aaron Pham
0d8098e4eb
fix(style): add parentheses ( #471 )
...
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
2024-09-03 05:12:07 -04:00
Aaron Pham
e8c71d931e
chore: run stylua [generated] ( #460 )
...
* chore: add stylua
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
* chore: running stylua
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
---------
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
2024-09-03 04:19:54 -04:00
Aaron Pham
4ad913435c
feat(templates): avanterules filetype support ( closes #254 ) ( #466 )
...
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
2024-09-03 04:09:13 -04:00
Aaron Pham
7266661413
feat(api): enable customizable calls functions ( #457 )
...
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
2024-09-02 12:22:48 -04:00
Aaron Pham
7912070c6f
fix(gemini): check if json can be decoded ( #446 )
...
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
2024-09-01 18:47:35 -04:00
Aaron Pham
0557deeab7
feat: tokenizers ( #429 )
...
* feat: tokenizers
This reverts commit d5a4db8321d232a1b9c0d86fc38e8dd516d15776.
* fix(inputs): #422
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
---------
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
2024-08-31 13:39:50 -04:00
yetone
c324e902bb
chore: refine code ( #426 )
2024-08-31 23:08:12 +08:00