smartarch/extremexp-llm

Full list of evaluation patterns: patterns.md

Installation

  1. (Optional) create a virtual environment: python3 -m venv .venv and activate it: source .venv/bin/activate
  2. install requirements: pip install -r requirements.txt

To use OpenAI (paid API), provide your API key, typically via the OPENAI_API_KEY environment variable.

LLM Evaluation

The agent_evaluation folder contains sample test instances for evaluating an LLM-based agent. The instances are manually created based on the test instance patterns. The evaluation can be run via the main.py file.

In the agent_evaluation/README.md file, the experimental results are summarized and discussed.

XXP Agent Chat

The xxp_agent folder contains code for running an LLM-based agent that chats with the user. Based on the configuration (see the examples folder), the available tools are selected, so the agent can read workflow specifications, data produced by experiments, and so on. To start the agent, run the main.py file.
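As an illustration, configuration-driven tool selection might look like the following sketch. The tool names and the config key here are hypothetical, not the repository's actual ones:

```python
# Hypothetical sketch of configuration-driven tool selection.
# Registry of all tools the agent could use (stand-in implementations).
TOOL_REGISTRY = {
    "read_workflow": lambda path: f"workflow spec at {path}",
    "read_results": lambda path: f"experiment data at {path}",
}

def select_tools(config: dict) -> dict:
    """Return only the tools enabled in the given configuration."""
    enabled = config.get("tools", [])
    return {name: fn for name, fn in TOOL_REGISTRY.items() if name in enabled}

tools = select_tools({"tools": ["read_workflow"]})
print(sorted(tools))  # ['read_workflow']
```

The point of the pattern is that the same agent code can run with different capabilities per example, simply by listing tools in the configuration file.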


Simple chat with an LLM-based agent. The tools are not implemented; instead, tool calls are redirected to human input. Input is multiline: press Ctrl+D (Linux) or Ctrl+Z (Windows) on an empty line to end it. To close the chat, terminate the script (Ctrl+C).
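The multiline input loop corresponds to a generic pattern like this (a sketch, not the repository's actual implementation): reading lines until the stream signals EOF, which is what Ctrl+D/Ctrl+Z produces on an empty line.

```python
import sys

def read_multiline(stream=None) -> str:
    """Read lines until EOF (Ctrl+D on Linux, Ctrl+Z on Windows)."""
    stream = stream or sys.stdin
    lines = []
    for line in stream:  # iteration stops when the stream hits EOF
        lines.append(line)
    return "".join(lines)
```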

Logs of the conversation are stored in agent_with_fake_tools_logs (not stored in git). The logs contain ANSI formatting characters for colors; to read them properly, either cat them in a terminal or use a VS Code extension.
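If you prefer plain text, the color codes can also be stripped with a short script (a generic sketch; the exact escape sequences used in the logs may vary):

```python
import re

# Matches CSI color sequences such as "\x1b[31m" (red) and "\x1b[0m" (reset).
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI color codes so a log reads as plain text."""
    return ANSI_ESCAPE.sub("", text)

print(strip_ansi("\x1b[31mAgent:\x1b[0m hello"))  # Agent: hello
```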

IDEKO AutoML example

Partial implementation of an AutoML workflow. The goal is to try several ML models; the results of this workflow can then be consulted with the LLM agent to choose the best model.

  1. navigate to examples/ideko: cd examples/ideko
  2. install requirements: pip install -r requirements.txt
  3. rename .env.example to .env and set the IDEKO_DATA_FOLDER (obtain the data from GitLab)
  4. run extract_ideko_data.py to extract the data from the ZIP files and keep only a subset of features (this simplifies and speeds up the AutoML; in a real ExtremeXP experiment, this extraction might be part of the experiment workflow)
  5. run automl.py to perform the AutoML workflow and obtain results
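Step 4 roughly corresponds to the following sketch: unpack a CSV from a ZIP archive and keep only selected columns. The file name and the feature subset here are hypothetical; the real script's behavior may differ.

```python
import csv
import io
import zipfile

# Hypothetical feature subset; the real extraction keeps different columns.
KEEP = ["timestamp", "spindle_speed"]

def extract_subset(zip_file, member: str) -> list[dict]:
    """Read one CSV member from a ZIP archive, keeping only KEEP columns."""
    with zipfile.ZipFile(zip_file) as zf:
        with zf.open(member) as f:
            rows = csv.DictReader(io.TextIOWrapper(f, encoding="utf-8"))
            return [{k: row[k] for k in KEEP} for row in rows]
```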

Machine Predictive Maintenance Classification AutoML

A use case similar to IDEKO, but with a better dataset, obtained from Kaggle (CC0 licence).

  1. navigate to examples/predictive_maintenance: cd examples/predictive_maintenance
  2. install requirements: pip install -r requirements.txt
  3. rename .env.example to .env (set the variables if needed)
  4. run automl.py to perform the AutoML workflow and obtain results
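The core AutoML loop in both examples boils down to trying several models and keeping the best one. A minimal sketch with stand-in evaluators (model names and scores are placeholders, not the repository's actual candidates):

```python
def run_automl(candidates: dict) -> tuple[str, float]:
    """Evaluate each candidate model and return the best (name, score) pair."""
    scores = {name: evaluate() for name, evaluate in candidates.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Stand-in evaluators returning fixed accuracies; in the real workflow each
# would train a model and score it on held-out data.
candidates = {
    "decision_tree": lambda: 0.82,
    "random_forest": lambda: 0.91,
    "knn": lambda: 0.78,
}
print(run_automl(candidates))  # ('random_forest', 0.91)
```

The result of such a loop (per-model scores) is exactly the kind of output the LLM agent can later consult to recommend a model.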

How to use local models (no longer used)

To use Ollama (local free models):

  • curl -fsSL https://ollama.com/install.sh | sh
  • Fetch a model: ollama pull llama2 (3.8GB)
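Once a model is pulled and the Ollama server is running, it can be queried over its local REST API on port 11434. A sketch that only constructs the request, assuming Ollama's /api/generate endpoint (actually sending it requires the server to be up):

```python
import json
import urllib.request

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("llama2", "Say hello.")
# urllib.request.urlopen(req) would return JSON with a "response" field.
```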
