Commit 442859e by McPatate, Oct 3, 2023

docs: correct section on llm-ls & add models configuration examples

All of the above still applies, but note:

* When an API token is provided, it will be passed as a header: `Authorization: Bearer <api_token>`.

* Instead of setting a Hugging Face model identifier in `model`, set the URL for your HTTP endpoint.
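As a minimal sketch of what that looks like on the wire (the endpoint URL, token, and `build_request` helper below are illustrative placeholders, not part of llm-intellij), the token travels in an `Authorization: Bearer` header alongside a JSON body:

```python
import json
import urllib.request

def build_request(endpoint: str, api_token: str, inputs: str) -> urllib.request.Request:
    """Build a request the way a custom backend would receive it: the API
    token goes in an Authorization: Bearer header, the prompt in the body."""
    body = json.dumps({"inputs": inputs}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("http://localhost:8080/generate", "my-token", "def add(a, b):")
print(req.get_header("Authorization"))  # → Bearer my-token
```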

### Models

**llm-intellij** is assumed to be compatible with any model that generates code.

Here are configs for some popular models, in JSON format, that you can paste into your settings (`Cmd+,` > `LLM Settings`).

#### [Starcoder](https://huggingface.co/bigcode/starcoder)

```json
{
  "tokensToClear": [
    "<|endoftext|>"
  ],
  "fim": {
    "enabled": true,
    "prefix": "<fim_prefix>",
    "middle": "<fim_middle>",
    "suffix": "<fim_suffix>"
  },
  "model": "bigcode/starcoder",
  "context_window": 8192,
  "tokenizer": {
    "repository": "bigcode/starcoder"
  }
}
```

> [!NOTE]
> These are the default config values.

#### [CodeLlama](https://huggingface.co/codellama/CodeLlama-13b-hf)

```json
{
  "tokensToClear": [
    "<EOT>"
  ],
  "fim": {
    "enabled": true,
    "prefix": "<PRE> ",
    "middle": " <MID>",
    "suffix": " <SUF>"
  },
  "model": "codellama/CodeLlama-13b-hf",
  "context_window": 4096,
  "tokenizer": {
    "repository": "codellama/CodeLlama-13b-hf"
  }
}
```

> [!NOTE]
> The leading and trailing spaces in these tokens are significant; keep them exactly as shown.
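To see why the spaces matter, here is a hypothetical sketch of how a fill-in-the-middle prompt is laid out from these settings (the actual assembly happens inside **llm-ls**; `build_fim_prompt` below only illustrates the token layout):

```python
def build_fim_prompt(fim: dict, before_cursor: str, after_cursor: str) -> str:
    """Lay out a fill-in-the-middle prompt: prefix token plus the code before
    the cursor, suffix token plus the code after it, then the middle token
    marking where the model should start generating."""
    return (
        f"{fim['prefix']}{before_cursor}"
        f"{fim['suffix']}{after_cursor}"
        f"{fim['middle']}"
    )

# CodeLlama's FIM tokens carry their own surrounding spaces.
codellama_fim = {"prefix": "<PRE> ", "middle": " <MID>", "suffix": " <SUF>"}
prompt = build_fim_prompt(codellama_fim, "def add(a, b):\n    return ", "\n")
# The spaces baked into the tokens end up verbatim in the prompt, which is
# why they must match what the model saw during training.
```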

### [**llm-ls**](https://github.com/huggingface/llm-ls)

By default, **llm-ls** is installed by **llm-intellij** the first time it is loaded. The binary is downloaded from the [release page](https://github.com/huggingface/llm-ls/releases) and stored in:
```shell
"$HOME/.cache/llm_intellij/bin"
```

When developing locally or if you built your own binary because your platform is not supported, you can set the `llm-ls` > `Binary path` setting to the path of the binary.

`llm-ls` > `Version` is used only when **llm-intellij** downloads **llm-ls** from the release page.

You can also set the log level for **llm-ls** with `llm-ls` > `Log level`, which accepts the usual values such as `info`, `warn`, and `error`.
The log file is located in:
```shell
"$HOME/.cache/llm_ls/llm-ls.log"
```

### Tokenizer
