docs: update links to wiki (#104)

Aaron Pham 2024-08-19 19:19:45 -04:00 committed by GitHub
parent 5f74c54e55
commit 2d87dff33b
## Contributing
Contributions to avante.nvim are welcome! If you're interested in helping out, feel free to submit pull requests or open issues. Before contributing, make sure your code has been thoroughly tested.
## Development
To set up the development environment:
1. Install [StyLua](https://github.com/JohnnyMorganz/StyLua) for Lua code formatting.
2. Install [pre-commit](https://pre-commit.com) for managing and maintaining pre-commit hooks.
3. After cloning the repository, run the following command to set up pre-commit hooks:
```sh
pre-commit install --install-hooks
```
To set up `lua_ls`, you can use the following configuration for `nvim-lspconfig`:
```lua
lua_ls = {
  settings = {
    Lua = {
      runtime = {
        version = "LuaJIT",
        special = { reload = "require" },
      },
      workspace = {
        library = {
          vim.fn.expand "$VIMRUNTIME/lua",
          vim.fn.expand "$VIMRUNTIME/lua/vim/lsp",
          vim.fn.stdpath "data" .. "/lazy/lazy.nvim/lua/lazy",
          vim.fn.expand "$HOME/path/to/parent", -- parent/avante.nvim
          "${3rd}/luv/library",
        },
      },
    },
  },
},
```
Then you can set `dev = true` in your `lazy` config for development.
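For example, a minimal `lazy.nvim` spec for local development might look like the following (a sketch only; the `opts` shown are placeholders, and this assumes your clone lives under lazy.nvim's configured `dev.path`):

```lua
{
  "yetone/avante.nvim",
  dev = true, -- load the plugin from the local dev path instead of the pinned release
  opts = {},
}
```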
## Custom Providers
To add support for custom providers, add an `AvanteProvider` spec to `opts.vendors`:
```lua
{
  provider = "my-custom-provider", -- point the active provider at your custom entry
  vendors = {
    ["my-custom-provider"] = {...}
  },
  windows = {
    wrap_line = true,
    width = 30, -- default % based on available width
  },
  ---@class AvanteConflictUserConfig
  diff = {
    debug = false,
    autojump = true,
    ---@type string | fun(): any
    list_opener = "copen",
  },
}
```
A custom provider should follow this spec:
```lua
---@type AvanteProvider
{
  endpoint = "https://api.openai.com/v1/chat/completions", -- The full endpoint of the provider
  model = "gpt-4o", -- The model name to use with this provider
  api_key_name = "OPENAI_API_KEY", -- The name of the environment variable that contains the API key
  --- This function is used to build the cURL arguments for the request.
  --- It takes the provider options as its first argument, followed by code_opts derived from the current buffer.
  --- code_opts includes:
  --- - question: input from the user
  --- - code_lang: the language of the code buffer
  --- - code_content: the content of the code buffer
  --- - selected_code_content: (optional) the code selected in visual mode, if any
  ---@type fun(opts: AvanteProvider, code_opts: AvantePromptOptions): AvanteCurlOutput
  parse_curl_args = function(opts, code_opts) end,
  --- This function is used to parse the incoming SSE stream.
  --- It takes the data stream as its first argument, followed by the SSE event state and opts
  --- derived from the current buffer.
  --- opts includes:
  --- - on_chunk: (fun(chunk: string): any) invoked when a delta chunk is parsed successfully
  --- - on_complete: (fun(err: string|nil): any) invoked on completion or on an error chunk
  ---@type fun(data_stream: string, event_state: string, opts: ResponseParser): nil
  parse_response_data = function(data_stream, event_state, opts) end,
}
```
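As an illustration, a hand-rolled `parse_response_data` for an OpenAI-style SSE stream might look like this. This is a hypothetical sketch, not the plugin's implementation; it assumes `data_stream` carries the JSON payload of a single `data:` event and that Neovim's built-in `vim.json.decode` is available:

```lua
-- Hypothetical parser for OpenAI-style streaming chunks (sketch only).
parse_response_data = function(data_stream, event_state, opts)
  if data_stream == "[DONE]" then
    opts.on_complete(nil) -- stream finished without error
    return
  end
  local ok, json = pcall(vim.json.decode, data_stream)
  if not ok or not json.choices then return end -- skip malformed or non-delta events
  local delta = json.choices[1] and json.choices[1].delta
  if delta and delta.content then opts.on_chunk(delta.content) end
end
```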
<details>
<summary>Full working example of Perplexity</summary>

```lua
vendors = {
  ---@type AvanteProvider
  perplexity = {
    endpoint = "https://api.perplexity.ai/chat/completions",
    model = "llama-3.1-sonar-large-128k-online",
    api_key_name = "PPLX_API_KEY",
    --- This function is used to build the cURL arguments for the request.
    parse_curl_args = function(opts, code_opts)
      local Llm = require "avante.llm"
      return {
        url = opts.endpoint,
        headers = {
          ["Accept"] = "application/json",
          ["Content-Type"] = "application/json",
          ["Authorization"] = "Bearer " .. os.getenv(opts.api_key_name),
        },
        body = {
          model = opts.model,
          messages = Llm.make_openai_message(code_opts), -- you can build your own messages, but this is advanced
          temperature = 0,
          max_tokens = 8192,
          stream = true, -- this is set by default
        },
      }
    end,
    -- This function is only needed if the vendor uses an SSE spec other than OpenAI's or Claude's.
    parse_response_data = function(data_stream, event_state, opts)
      local Llm = require "avante.llm"
      Llm.parse_openai_response(data_stream, event_state, opts)
    end,
  },
},
```
</details>
## Local LLM
If you want to use a local LLM that serves an OpenAI-compatible API, set `["local"] = true`:
```lua
openai = {
  endpoint = "http://127.0.0.1:3000",
  model = "code-gemma",
  temperature = 0,
  max_tokens = 4096,
  ["local"] = true,
},
```
You are responsible for starting the server yourself before using Neovim.
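You can sanity-check the server with a plain `curl` request before launching Neovim. This is a sketch that assumes your server follows the standard OpenAI path `/v1/chat/completions`; the endpoint and model below match the example config above, so adjust them for your setup:

```sh
# Send a one-off chat completion request to the local server.
curl http://127.0.0.1:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "code-gemma", "messages": [{"role": "user", "content": "hello"}]}'
```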
See the [wiki](https://github.com/yetone/avante.nvim/wiki) for more recipes and tricks.
## License