As I understand it, LongRAG extracts articles from Wikipedia XML dump files and stores them across multiple files, each containing multiple documents in XML or JSON format. It then groups these Wikipedia documents into larger retrieval units through its own indexing pipeline, which enables more efficient retrieval and answer extraction. However, I would like to use LongRAG to build an index over a single PDF file and test it. Is this possible?
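For context, here is roughly what I have in mind: a minimal sketch (not LongRAG's actual pipeline; the JSON field names, the output path, and the idea of treating the whole PDF as one retrieval unit are my assumptions) that extracts text from a single PDF and writes it as one JSON-lines document, which I would then try to feed into the indexing scripts in place of the Wikipedia corpus files.

```python
import json
from pypdf import PdfReader  # pip install pypdf

def pdf_to_corpus(pdf_path: str, out_path: str, title: str) -> None:
    """Extract text from one PDF and write it as a single JSON-lines document.

    NOTE: the "title"/"text" fields are assumptions about the corpus format,
    not LongRAG's documented schema.
    """
    reader = PdfReader(pdf_path)
    # Concatenate all page text into one long retrieval unit.
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(json.dumps({"title": title, "text": text}, ensure_ascii=False) + "\n")

pdf_to_corpus("my_paper.pdf", "corpus.jsonl", "My Paper")
```

Would something along these lines be compatible with the existing indexing scripts, or does the pipeline assume the Wikipedia dump structure throughout?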