Commit
Add knowledge graph
* migrate content from README.md to KG
slimslenderslacks committed Sep 5, 2024
1 parent 0bb48d7 commit e32fc06
Showing 28 changed files with 3,284 additions and 116 deletions.
117 changes: 1 addition & 116 deletions README.md
@@ -62,122 +62,7 @@ docker run
See [docs](https://vonwig.github.io/prompts.docs/#/page/running%20the%20prompt%20engine) for more details on how to run the conversation loop,
and especially how to use it to run local prompts that are not yet in GitHub.

## Function volumes

Every function container has a shared volume mounted at `/thread`.
The volume is ephemeral and is deleted at the end of the run, but it
can be kept for inspection by passing the argument `--thread-id`.
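
For example, a run that keeps its thread for inspection might look like the following sketch, which mirrors the quick-start invocation from the docs (the `my-debug-thread` value, and the assumption that `--thread-id` takes the id as its argument, are illustrative):

```sh
docker run \
    --rm -it \
    --pull=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --mount type=volume,source=docker-prompts,target=/prompts \
    --mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
    vonwig/prompts:latest \
    run \
    --host-dir $PWD \
    --user jimclark106 \
    --platform darwin \
    --prompts "github:docker/labs-make-runbook?ref=main&path=prompts/lazy_docker" \
    --thread-id my-debug-thread
```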

## Output json-rpc notifications

Add the flag `--jsonrpc` to the list of arguments to switch the stdout stream to a series of `jsonrpc` notifications.
This is useful if you are running the tool and streaming responses onto a canvas.

Try running with the `--jsonrpc` flag to see a full example; the stdout stream will look something like this.

```
{"jsonrpc":"2.0","method":"message","params":{"content":" consistently"}}Content-Length: 65
{"jsonrpc":"2.0","method":"message","params":{"content":" high"}}Content-Length: 61
{"jsonrpc":"2.0","method":"message","params":{"content":"."}}Content-Length: 52
{"jsonrpc":"2.0","method":"functions","params":null}Content-Length: 57
{"jsonrpc":"2.0","method":"functions-done","params":null}Content-Length: 1703
```
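
A client can consume this stream by splitting on the `Content-Length` headers. Below is a minimal sketch in Python, assuming LSP-style framing (a length header before each JSON payload), which is how the sample above reads; all names here are mine:

```python
import json
import sys

def read_notifications(stream):
    """Yield JSON-RPC notifications from a Content-Length framed byte stream."""
    while True:
        header = stream.readline()
        if not header:
            return  # stream closed
        line = header.strip()
        if not line.lower().startswith(b"content-length:"):
            continue  # tolerate blank or unexpected separator lines
        length = int(line.split(b":", 1)[1])
        yield json.loads(stream.read(length))

for note in read_notifications(sys.stdin.buffer):
    if note.get("method") == "message":
        # append streamed content to the current message
        print(note["params"].get("content", ""), end="", flush=True)
```

Piped usage would then be something like `docker run ... --jsonrpc | python consume.py` (file name hypothetical).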

### Notification Methods

#### message

This is a message from the assistant role or from a tool role.
The params for the `message` method should be appended to the conversation. The `params` will contain either
`content` or `debug`.

```json
{"params": {"content": "append this output to the current message"}}
{"params": {"debug": "this is a debug message and should only be shown in debug mode"}}
```

#### prompts

Generated user and system prompts are sent to the client so that they can be displayed. These
are sent after extractors are expanded so that users can see the actual prompts sent to the AI model.

```json
{"params": {"messages": [{"role": "system", "content": "system prompt message"}]}}
```

#### functions

Functions are JSON-encoded strings. When streaming, the content of the JSON `params` changes as
the function streams. It can be re-rendered in place to show the function definition completing
as it streams.

```json
{"params": "{}"}
```

#### functions-done

This notification is sent when a function definition has stopped streaming, and is now being executed.
The next notification after this will be a tool message.

```json
{"params": ""}
```

#### error

The `error` notification is not a message from the model, prompts, or tools. Instead, it represents a kind
of _system_ error trying to run the conversation loop. It should always be shown to the user as it
probably represents something like a networking error or a configuration problem.

```json
{"params": {"content": "error message"}}
```
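
Taken together, a client might dispatch on `method` like the following sketch (the handler callbacks are hypothetical hooks the client supplies; only the notification shapes documented above are assumed):

```python
def handle_notification(note, handlers):
    """Dispatch one parsed notification to client-supplied handler callbacks."""
    method = note.get("method")
    params = note.get("params")
    if method == "message":
        if params and "content" in params:
            handlers.append_content(params["content"])
        elif params and "debug" in params:
            handlers.show_debug(params["debug"])  # only shown in debug mode
    elif method == "prompts":
        handlers.show_prompts(params["messages"])   # prompts actually sent to the model
    elif method == "functions":
        handlers.render_function(params)            # JSON string; re-render as it streams
    elif method == "functions-done":
        handlers.function_done()                    # a tool message follows
    elif method == "error":
        handlers.show_error(params["content"])      # always surface to the user
```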

### Request Methods

#### prompt

Send a user prompt into the conversation loop. The `prompt` method takes the following `params`.

```json
{"params": {"content": "here is the user prompt"}}
```
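
This excerpt does not show the request transport. Purely as an illustration, if requests use the same `Content-Length` framing as the notifications and are written to the engine's stdin (both assumptions), sending a prompt could look like:

```python
import json

def send_prompt(stdin, content, request_id=1):
    """Hypothetical: frame a `prompt` request and write it to the engine's stdin."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "prompt",
        "params": {"content": content},
    }).encode("utf-8")
    stdin.write(f"Content-Length: {len(body)}\r\n\r\n".encode("ascii"))
    stdin.write(body)
    stdin.flush()
```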

## Prompt file layout

Each prompt directory should contain a README.md describing the prompts and their purpose. Each prompt file
is a markdown document that supports moustache templates for substituting context extracted from the project.

```
prompt_dir/
├── 010_system_prompt.md
├── 020_user_prompt.md
└── README.md
```

* ordering of messages is determined by filename sorting
* the role is encoded in the name of the file (see the mapping sketched below)
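
With the layout above, the files map to messages in this order (an illustrative mapping; the placeholders stand in for real prompt text):

```
010_system_prompt.md  ->  {"role": "system", "content": "<contents of the file>"}
020_user_prompt.md    ->  {"role": "user",   "content": "<contents of the file>"}
```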

### Moustache Templates

The prompt templates can contain moustache expressions like `{{project.dockerfiles}}` to add information
extracted from the current project. Examples of facts that can be added to the
prompts are:

* `{{platform}}` - the platform of the current development environment.
* `{{username}}` - the Docker Hub username (and default namespace for image pushes).
* `{{languages}}` - names of languages discovered in the project.
* `{{project.dockerfiles}}` - the relative paths to local Dockerfiles.
* `{{project.composefiles}}` - the relative paths to local Docker Compose files.

The entire `project-facts` map is also available using dot-syntax
forms like `{{project-facts.project-root-uri}}`. All moustache template
expressions documented [here](https://github.com/yogthos/Selmer) are supported.
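
As a sketch, a user prompt that lists the discovered Dockerfiles could combine these facts like this (what it renders to depends on the facts actually extracted for the project):

```md
# Prompt user

This {{platform}} project contains the following Dockerfiles:

{{#project.dockerfiles}}
- {{.}}
{{/project.dockerfiles}}
```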
[PROMPTS KNOWLEDGE GRAPH](https://vonwig.github.io/prompts.docs/#/page/index)

## Building

Binary file added graphs/prompts/.DS_Store
Binary file not shown.
88 changes: 88 additions & 0 deletions graphs/prompts/journals/2024_09_03.md
@@ -0,0 +1,88 @@
- Run some prompts checked in to GitHub against a project in the current working directory.
id:: 66d779c7-c1b7-40c6-a635-fa712da492de
```sh
docker run \
--rm -it \
--pull=always \
-v /var/run/docker.sock:/var/run/docker.sock \
--mount type=volume,source=docker-prompts,target=/prompts \
--mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
vonwig/prompts:latest \
run \
--host-dir $PWD \
--user jimclark106 \
--platform darwin \
--prompts "github:docker/labs-make-runbook?ref=main&path=prompts/lazy_docker"
```
- Most of this is boilerplate except:
  - the `--user` option in line 10 requires a valid Docker Hub username
  - the `--prompts` option in line 12 requires a valid [github reference]([[GitHub Refs]]) to some markdown prompts
  - if the project is located somewhere other than $PWD then the `--host-dir` will need to be updated.
- Run a local prompt markdown file against a project in the current working directory. In this example, the prompts are not pulled from GitHub; instead, they are being developed in a directory called `$PROMPTS_DIR`, and the local prompt file is `$PROMPTS_DIR/myprompts.md`.
id:: 66d77f1b-1684-480d-ad7b-5e9f53292fe4
```sh
docker run \
--rm -it \
--pull=always \
-v /var/run/docker.sock:/var/run/docker.sock \
--mount type=volume,source=docker-prompts,target=/prompts \
--mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
--mount type=bind,source=$PROMPTS_DIR,target=/app/workdir \
--workdir /app/workdir \
vonwig/prompts:latest \
run \
--host-dir $PWD \
--user jimclark106 \
--platform darwin \
--prompts-file myprompts.md
```
- Most of this is boilerplate except:
  - the `--user` option in line 12 requires a valid Docker Hub username
  - the `--prompts-file` option in line 14 is a relative path to a prompts file (relative to $PROMPTS_DIR)
  - if the project being analyzed is located somewhere other than $PWD then the `--host-dir` will need to be updated.
- [[Running the Prompt Engine]]
- [[Authoring Prompts]]
- Here is a prompt file with lots of non-default metadata (it uses [extractors]([[Prompt Extractors]]), a [[tool]], and a local LLM in [[ollama]]). It uses one system prompt and one user prompt. Note that the user prompt contains a moustache template to pull data in from an extractor.
id:: 66d7f3ff-8769-40b3-b6b5-fc4fceea879e

```md
---
extractors:
- name: linguist
image: vonwig/go-linguist:latest
command:
- -json
output-handler: linguist
tools:
- name: findutils-by-name
description: find files in a project by name
parameters:
type: object
properties:
glob:
type: string
description: the glob pattern for files that should be found
container:
image: vonwig/findutils:latest
command:
- find
- .
- -name
model: llama3.1
url: http://localhost/v1/chat/completions
stream: false
---

# Prompt system

You are an expert on analyzing project content.

# Prompt user

{{#linguist}}
This project contains {{language}} code.
{{/linguist}}

Can you find any language specific project files and list them?

```
45 changes: 45 additions & 0 deletions graphs/prompts/journals/2024_09_04.md
@@ -0,0 +1,45 @@
- An extractor is a function that runs before prompts are sent to an LLM. It can _extract_ some data from a project directory in order to inject context into a set of prompts.
id:: 66d87dd3-efa2-4eb3-ba92-5cc4c2f9700b
- Create a docker container that expects a project bind mounted at `/project` and that writes `application/json` encoded data to `stdout`. The data written to `stdout` is what will be made available to any subsequent prompt templates.
id:: 66d8a36a-432f-4d1a-a48c-edbe0224b182
To test your extractor function, run
```sh
docker run \
--rm -it \
--mount type=bind,source=$PWD,target=/project \
--workdir /project \
image-name:latest arg1 arg2 ....
```
  - this would make your current working directory available to the extractor at `/project`
  - you can also arrange to pass arguments to the extractor function when you define the extractor metadata (a minimal extractor is sketched after this list)
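- A minimal extractor can be just a script that prints one JSON object to `stdout` (a sketch; the file name `extract.sh` and the `file-count` key are hypothetical):
  ```sh
  #!/bin/sh
  # hypothetical extractor entrypoint: describe the project mounted at /project
  # by emitting a single JSON object on stdout
  file_count=$(find /project -type f | wc -l | tr -d ' ')
  printf '{"file-count": %s}' "$file_count"
  ```
  Packaged into an image (for example, with `ENTRYPOINT ["/bin/sh", "/extract.sh"]` in a Dockerfile), it can then be referenced from prompt metadata as shown below.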
- Once you have defined an extractor image (e.g. `image-name:latest`), create an entry in the prompt file to reference it.
id:: 66d8a4f3-656d-42bf-b22a-60bba2d1887f
```md
---
extractors:
- name: my-extractor
image: image-name:latest
command:
- arg1
- arg2
---
# Prompt user
I can now inject context into the prompt using moustache template syntax.
{{#my-extractor}}
{{.}}
{{/my-extractor}}
```
Read more about [moustache templates](https://mustache.github.io/mustache.5.html).
- #log working on [[Prompt Extractors]]
#log working on [[Authoring Prompts]]
- A very simple prompt file that contains no metadata and just a single user prompt is
id:: 66d8a396-9268-4917-882f-da4c52b7b5dd
```md
# Prompt user
Tell me about Docker!
```
- #log working on [[GitHub Refs]]
-
4 changes: 4 additions & 0 deletions graphs/prompts/journals/2024_09_05.md
@@ -0,0 +1,4 @@
- [[conversation loop]]
id:: 66d9d1e0-a13e-4d62-8db7-9eebb37714a8
-
-
Binary file added graphs/prompts/logseq/.DS_Store
Binary file not shown.
Binary file added graphs/prompts/logseq/bak/.DS_Store
Binary file not shown.
