
Releases: run-llama/llama_index

v0.12.8

21 Dec 03:37 · 4b50ce8

v0.12.7

20 Dec 01:00 · 2677a53

v0.12.6

18 Dec 01:37 · 6d770ae

v0.12.5

09 Dec 21:31 · ae18106

v0.12.4

08 Dec 18:00 · 37b3403

v0.12.3

06 Dec 04:40 · dbd89ab

v0.12.2

26 Nov 19:24 · cbf958f

v0.12.1

21 Nov 03:17 · 3d00f90

2024-11-17 (v0.12.0)

18 Nov 17:44 · 49416d2

NOTE: Updating to v0.12.0 requires bumping every other llama-index-* package, since every package has had a version bump. Only notable changes are listed below.
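Since the core package and all integrations are bumped together, one way to do the upgrade is to update the core plus each installed integration package. A minimal sketch (the integration package names are examples; substitute the ones actually present in your environment):

```shell
# Upgrade the core package first.
pip install -U llama-index-core

# Then upgrade each installed integration (example names shown).
pip install -U llama-index-llms-openai llama-index-embeddings-openai

# Alternatively, find every installed llama-index-* package and upgrade them all:
pip list --format=freeze | grep '^llama-index' | cut -d= -f1 | xargs pip install -U
```

The last command lists installed packages in `name==version` form, keeps those whose names start with `llama-index`, strips the version, and passes the names to `pip install -U`.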

llama-index-core [0.12.0]

  • Dropped Python 3.8 support, unpinned numpy (#16973)
  • Kg/dynamic pg triplet retrieval limit (#16928)

llama-index-indices-managed-llama-cloud [0.6.1]

  • Add ID support for LlamaCloudIndex & update from_documents logic, modernize APIs (#16927)
  • Allow skipping waiting for ingestion when uploading files (#16934)
  • Add support for files endpoints (#16933)

llama-index-indices-managed-vectara [0.3.0]

  • Add Custom Prompt Parameter (#16976)

llama-index-llms-bedrock [0.3.0]

  • Minor fix for messages/completion to prompt (#15729)

llama-index-llms-bedrock-converse [0.4.0]

  • Fix async streaming with bedrock converse (#16942)

llama-index-multi-modal-llms-nvidia [0.2.0]

llama-index-readers-confluence [0.3.0]

  • Permit passing params to Confluence client (#16961)

llama-index-readers-github [0.5.0]

  • Add base URL extraction method to GithubRepositoryReader (#16926)

llama-index-vector-stores-weaviate [1.2.0]

  • Allow passing in Weaviate vector store kwargs (#16954)

v0.11.23

12 Nov 05:05 · e4ff8c8