IndexError: list index out of range #810
Comments
I had this issue and was able to work through it. My project can be viewed at https://www.groupconscience.ai. There are multiple posts that I believe are connected to this one. After wrestling with it for many hours I do not have all of the steps that were taken, but I believe that a significant part of the many errors I had were related to using the nightly distributions of PyTorch and TensorFlow. Here are the install methods I used after uninstalling each; your mileage will likely vary. Note that my configuration is an Nvidia RTX 4090 on Ubuntu 22.04 LTS. If you are on a different GPU this post may be informative but will definitely need adjustments.

TensorFlow:

PyTorch:

I also needed these:

I then followed a sequence of installs. The first step was creating a list of installation requirements for the Python files in the localGPT directory. This included a bunch of packages that are not in the recommended requirements.txt for localGPT. It was obtained by running the following commands from within the localGPT folder:

`mv requirements.txt requirements_original.txt`

I then edited the new requirements.txt to remove the duplicate entries where the same package was listed twice with more than one version, starting by keeping the lower versions. I then installed the newly created requirements.txt:

`pip install -r requirements.txt`

There were still version errors when running ingest.py and run_localGPT.py. This is where things are a bit grey. I know that I had to downgrade bitsandbytes to 0.42.0:

`pip install bitsandbytes==0.42.0`

I also went back and installed the original requirements.txt to get the known requirement versions correct:

`pip install -r requirements_original.txt`

At this point I was able to run ingest.py on my documents without error. In my case I edited constants.py with the following setting:

`MODEL_ID = "TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF"`

In prompt_template_utils.py I changed the prompt to a simple one-sentence instruction.
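The manual de-duplication step described above (keeping the lower of two pinned versions when the same package is listed twice) can be sketched as below. `dedupe_requirements` is a hypothetical helper for illustration, not part of localGPT:

```python
def parse_pin(line):
    """Split 'name==1.2.3' into (lowercased name, version tuple)."""
    name, _, ver = line.strip().partition("==")
    return name.lower(), tuple(int(p) for p in ver.split(".") if p.isdigit())

def dedupe_requirements(lines):
    """Keep one '==' pin per package, preferring the lower version."""
    kept = {}
    for line in lines:
        if "==" not in line:
            continue  # skip blanks, comments, and unpinned entries
        name, ver = parse_pin(line)
        if name not in kept or ver < kept[name][0]:
            kept[name] = (ver, line.strip())
    return sorted(pin for _, pin in kept.values())
```

For example, `dedupe_requirements(["torch==2.1.0", "torch==2.0.1", "numpy==1.26.0"])` keeps `torch==2.0.1`, the lower of the two torch pins.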
I also added a line to make sure that the template would always be the llama style:

`system_prompt = """You are Alex, an honest, insightful, empathetic AA mentor that exists to help alcoholics stay sober and always responds.\n"""`

Notice the `prompt_template_type = 'llama'`. That is pretty much it. I hope this is helpful for someone and contributes to the livelihood of PromptEngineer. As a last effort to be helpful, here is a listing of all of my installed packages as they are in my current functional environment (packages in environment at ~/miniconda3/envs/localGPT):
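The template-forcing edit amounts to something like the sketch below. The function name and the llama template string here are illustrative assumptions, not localGPT's exact code; the point is that the system prompt is wrapped in llama-style `<<SYS>>` markers regardless of which template type the caller asks for:

```python
# System prompt from the comment above.
system_prompt = (
    "You are Alex, an honest, insightful, empathetic AA mentor that exists "
    "to help alcoholics stay sober and always responds.\n"
)

def build_prompt(prompt_template_type="llama"):
    # Force the llama-style template no matter what was requested,
    # mirroring the "always llama" line described above.
    prompt_template_type = "llama"
    return (
        f"[INST] <<SYS>>\n{system_prompt}<</SYS>>\n\n"
        "{user_prompt} [/INST]"  # placeholder filled in at query time
    )
```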
Here is a requirements list of the PyPI packages that can be copy-pasted into a requirements.txt file and installed using `pip install -r requirements.txt`: aiosignal==1.3.1 …
hello, I have a 15-CPU, 30 GB RAM server.
When I try to run ingest.py in CPU mode, I receive an error like this:
In source_documents I have 590 PDF documents (60 MB).
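With 590 PDFs, an `IndexError` during ingest is often triggered by one malformed file. A generic way to isolate it is to try each file individually; the `load` callable below is a stand-in for whatever loader ingest.py actually uses, not localGPT's API:

```python
from pathlib import Path

def find_failing_docs(folder, load):
    """Call `load` on each PDF in `folder` and collect the files that raise."""
    failures = []
    for pdf in sorted(Path(folder).glob("*.pdf")):
        try:
            load(pdf)
        except Exception as exc:
            failures.append((pdf.name, repr(exc)))
    return failures
```

Running this with the real document loader over the source_documents folder should point at the specific file that raises `IndexError: list index out of range`, which can then be fixed or removed.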