Cleanup #2

Merged
4 commits merged on Nov 10, 2023
Changes from all commits
4 changes: 0 additions & 4 deletions .gitignore
@@ -1,10 +1,6 @@
*.csv
*.npy
elk/models/*
elk/trained/*
nohup.out
.idea
*.pkl

# scripts for experiments in progress
my_*.sh
4 changes: 2 additions & 2 deletions MANIFEST.in
@@ -1,2 +1,2 @@
recursive-include elk/promptsource/templates *
recursive-include elk/resources *
recursive-include ccs/promptsource/templates *
recursive-include ccs/resources *
24 changes: 12 additions & 12 deletions README.md
@@ -21,52 +21,52 @@ Our code is based on [PyTorch](http://pytorch.org)
and [Huggingface Transformers](https://huggingface.co/docs/transformers/index). We test the code on Python 3.10 and
3.11.

First install the package with `pip install -e .` in the root directory, or `pip install eleuther-elk` to install from PyPi. Use `pip install -e .[dev]` if you'd like to contribute to the project (see **Development** section below). This should install all the necessary dependencies.
First install the package with `pip install -e .` in the root directory. Use `pip install -e .[dev]` if you'd like to contribute to the project (see **Development** section below). This should install all the necessary dependencies.

To fit reporters for the HuggingFace model `model` and dataset `dataset`, just run:

```bash
elk elicit microsoft/deberta-v2-xxlarge-mnli imdb
ccs elicit microsoft/deberta-v2-xxlarge-mnli imdb
```

This will automatically download the model and dataset, run the model and extract the relevant representations if they
aren't cached on disk, fit reporters on them, and save the reporter checkpoints to the `elk-reporters` folder in your
aren't cached on disk, fit reporters on them, and save the reporter checkpoints to the `ccs-reporters` folder in your
home directory. It will also evaluate the reporter classification performance on a held out test set and save it to a
CSV file in the same folder.

The following will generate a CCS (Contrast Consistent Search) reporter instead of the CRC-based reporter, which is the
default.

```bash
elk elicit microsoft/deberta-v2-xxlarge-mnli imdb --net ccs
ccs elicit microsoft/deberta-v2-xxlarge-mnli imdb --net ccs
```

The following command will evaluate the probe from the run naughty-northcutt on the hidden states extracted from the
model deberta-v2-xxlarge-mnli for the imdb dataset. It will result in an `eval.csv` and `cfg.yaml` file, which are
stored under a subfolder in `elk-reporters/naughty-northcutt/transfer_eval`.
stored under a subfolder in `ccs-reporters/naughty-northcutt/transfer_eval`.

```bash
elk eval naughty-northcutt microsoft/deberta-v2-xxlarge-mnli imdb
ccs eval naughty-northcutt microsoft/deberta-v2-xxlarge-mnli imdb
```

The following runs `elicit` on the Cartesian product of the listed models and datasets, storing it in a special folder
ELK_DIR/sweeps/<memorable_name>. Moreover, `--add_pooled` adds an additional dataset that pools all of the datasets
CCS_DIR/sweeps/<memorable_name>. Moreover, `--add_pooled` adds an additional dataset that pools all of the datasets
together. You can also add a `--visualize` flag to visualize the results of the sweep.

```bash
elk sweep --models gpt2-{medium,large,xl} --datasets imdb amazon_polarity --add_pooled
ccs sweep --models gpt2-{medium,large,xl} --datasets imdb amazon_polarity --add_pooled
```

If you just do `elk plot`, it will plot the results from the most recent sweep.
If you just do `ccs plot`, it will plot the results from the most recent sweep.
If you want to plot a specific sweep, you can do so with:

```bash
elk plot {sweep_name}
ccs plot {sweep_name}
```

## Caching

The hidden states resulting from `elk elicit` are cached as a HuggingFace dataset to avoid having to recompute them
The hidden states resulting from `ccs elicit` are cached as a HuggingFace dataset to avoid having to recompute them
every time we want to train a probe. The cache is stored in the same place as all other HuggingFace datasets, which is
usually `~/.cache/huggingface/datasets`.

@@ -81,7 +81,7 @@ Use `pip install pre-commit && pre-commit install` in the root folder before you
https://img.shields.io/static/v1?label=Remote%20-%20Containers&message=Open&color=blue&logo=visualstudiocode
)
](
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/EleutherAI/elk
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/EleutherAI/ccs
)

### Run tests
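Taken together, the README changes above amount to renaming the command-line tool from `elk` to `ccs`. A brief side-by-side of the invocations touched in this diff, using the same models, datasets, and run names already shown above:

```bash
# Every `elk` subcommand becomes the corresponding `ccs` subcommand:
ccs elicit microsoft/deberta-v2-xxlarge-mnli imdb                    # was: elk elicit ...
ccs eval naughty-northcutt microsoft/deberta-v2-xxlarge-mnli imdb    # was: elk eval ...
ccs sweep --models gpt2-{medium,large,xl} --datasets imdb amazon_polarity --add_pooled
ccs plot {sweep_name}                                                # was: elk plot ...
```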
File renamed without changes.
10 changes: 5 additions & 5 deletions elk/__main__.py → ccs/__main__.py
@@ -1,13 +1,13 @@
"""Main entry point for `elk`."""
"""Main entry point for `ccs`."""

from dataclasses import dataclass

from simple_parsing import ArgumentParser

from elk.evaluation.evaluate import Eval
from elk.plotting.command import Plot
from elk.training.sweep import Sweep
from elk.training.train import Elicit
from ccs.evaluation.evaluate import Eval
from ccs.plotting.command import Plot
from ccs.training.sweep import Sweep
from ccs.training.train import Elicit


@dataclass
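Since `__main__.py` moves under the `ccs` package, the tool can also be invoked as a module. A minimal sketch, assuming the console script is registered under the new name:

```bash
# Both forms should dispatch to the same entry point after the rename
ccs elicit microsoft/deberta-v2-xxlarge-mnli imdb
python -m ccs elicit microsoft/deberta-v2-xxlarge-mnli imdb
```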
File renamed without changes.
File renamed without changes.
6 changes: 3 additions & 3 deletions elk/evaluation/evaluate.py → ccs/evaluation/evaluate.py
@@ -6,7 +6,7 @@
import torch
from simple_parsing.helpers import field

from ..files import elk_reporter_dir
from ..files import ccs_reporter_dir
from ..metrics import evaluate_preds
from ..run import Run
from ..utils import Color
@@ -22,7 +22,7 @@ class Eval(Run):
def __post_init__(self):
# Set our output directory before super().execute() does
if not self.out_dir:
root = elk_reporter_dir() / self.source
root = ccs_reporter_dir() / self.source
self.out_dir = root / "transfer" / "+".join(self.data.datasets)

def execute(self, highlight_color: Color = "cyan"):
@@ -36,7 +36,7 @@ def apply_to_layer(
device = self.get_device(devices, world_size)
val_output = self.prepare_data(device, layer, "val")

experiment_dir = elk_reporter_dir() / self.source
experiment_dir = ccs_reporter_dir() / self.source

reporter_path = experiment_dir / "reporters" / f"layer_{layer}.pt"
reporter = torch.load(reporter_path, map_location=device)
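For reference, the path layout used in `apply_to_layer` above means a trained reporter can also be loaded by hand. A minimal sketch (the run name and layer number are illustrative, and `CCS_DIR` is assumed to be unset so the default `~/ccs-reporters` location applies):

```python
from pathlib import Path

import torch

# Checkpoints are written one per layer under <run>/reporters/, as in evaluate.py above.
run_dir = Path.home() / "ccs-reporters" / "naughty-northcutt"
reporter_path = run_dir / "reporters" / "layer_12.pt"

# map_location lets the checkpoint load on a machine without the original GPU.
reporter = torch.load(reporter_path, map_location="cpu")
print(type(reporter))
```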
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
10 changes: 5 additions & 5 deletions elk/files.py → ccs/files.py
@@ -8,14 +8,14 @@

def sweeps_dir() -> Path:
"""Return the directory where sweeps are stored."""
return elk_reporter_dir() / "sweeps"
return ccs_reporter_dir() / "sweeps"


def elk_reporter_dir() -> Path:
def ccs_reporter_dir() -> Path:
"""Return the directory where reporter checkpoints and logs are stored."""
env_dir = os.environ.get("ELK_DIR", None)
env_dir = os.environ.get("CCS_DIR", None)
if env_dir is None:
log_dir = Path.home() / "elk-reporters"
log_dir = Path.home() / "ccs-reporters"
else:
log_dir = Path(env_dir)

@@ -47,4 +47,4 @@ def memorably_named_dir(parent: Path):

def transfer_eval_directory(source: str) -> Path:
"""Return the directory where transfer evals are stored."""
return elk_reporter_dir() / source / "transfer_eval"
return ccs_reporter_dir() / source / "transfer_eval"
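As the hunks above show, the output root is now controlled by the `CCS_DIR` environment variable, falling back to `~/ccs-reporters` when it is unset. A minimal usage sketch, with an illustrative scratch path:

```bash
# Redirect reporter checkpoints, logs, and sweeps to a scratch volume
export CCS_DIR=/scratch/$USER/ccs-runs
ccs elicit microsoft/deberta-v2-xxlarge-mnli imdb   # writes under $CCS_DIR instead of ~/ccs-reporters
```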
File renamed without changes.
File renamed without changes.
File renamed without changes.
4 changes: 2 additions & 2 deletions elk/metrics/eval.py → ccs/metrics/eval.py
@@ -1,5 +1,5 @@
from dataclasses import asdict, dataclass
from typing import Literal
from typing import Any, Literal

import torch
from einops import repeat
@@ -26,7 +26,7 @@ class EvalResult:
cal_thresh: float | None
"""The threshold used to compute the calibrated accuracy."""

def to_dict(self, prefix: str = "") -> dict[str, float]:
def to_dict(self, prefix: str = "") -> dict[str, Any]:
"""Convert the result to a dictionary."""
acc_dict = {f"{prefix}acc_{k}": v for k, v in asdict(self.accuracy).items()}
cal_acc_dict = (
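The widened `dict[str, Any]` return type reflects that the flattened result is not purely floats; for example, `cal_thresh` above is `float | None`. A minimal illustration of the pattern, using hypothetical field names rather than the full `EvalResult`:

```python
from dataclasses import asdict, dataclass
from typing import Any


@dataclass
class ResultSketch:
    acc: float
    cal_thresh: float | None  # may be None, so dict[str, float] would be too narrow

    def to_dict(self, prefix: str = "") -> dict[str, Any]:
        """Flatten the dataclass into a prefixed dictionary, mirroring EvalResult.to_dict."""
        return {f"{prefix}{k}": v for k, v in asdict(self).items()}


print(ResultSketch(acc=0.9, cal_thresh=None).to_dict(prefix="val_"))
```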
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.