#912 identified a use case where a very large number of files in S3 need to be ingested.
While #943 proposes a simple solution for ingesting from S3 via glob patterns, it wouldn't scale to a large number of files.
This ticket suggests a more flexible approach.
Using the Container source, the user should be able to enable a special mode that tells Kamu that STDOUT will be producing not the data itself, but URLs of files that should be ingested:
```yaml
fetch:
  kind: Container
  image: "ghcr.io/kamu-data/fetch-my-dataset:0.1.0"
  yields: Urls # Container will be returning URLs to Parquet files
  env:
    - name: AWS_API_KEY
read:
  kind: Parquet
```
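For illustration, the container's STDOUT in this mode might be a newline-delimited list of URLs (the exact wire format is not pinned down by this ticket; one URL per line is an assumption):

```
s3://my-data-bucket/exports/part-0001.parquet
s3://my-data-bucket/exports/part-0002.parquet
```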
This will allow users to write custom scripts that can list the files more efficiently using their knowledge of the data source's structure (see the sketch below).
The existing source state mechanism can be reused for such containers to keep track of where they left off.
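To make this concrete, here is a minimal sketch of such a fetch script. Everything in it is illustrative: the bucket and prefix are made up, and the `ODF_ETAG` / `ODF_NEW_ETAG_PATH` environment variables stand in for whatever interface Kamu's container protocol actually exposes for source state. The script lists Parquet objects after the last key seen on the previous run and prints their URLs to STDOUT, one per line:

```python
#!/usr/bin/env python3
# Illustrative fetch script for the proposed `yields: Urls` mode.
# Lists new Parquet files in an S3 prefix and prints their URLs to
# STDOUT, one per line, resuming from the key recorded in source state.
import os
import boto3

BUCKET = "my-data-bucket"  # hypothetical bucket and prefix
PREFIX = "exports/"


def main():
    s3 = boto3.client("s3")

    # Source state from the previous run: the last S3 key we emitted.
    # The variable name is an assumption about the container protocol.
    last_key = os.environ.get("ODF_ETAG", "")

    kwargs = {"Bucket": BUCKET, "Prefix": PREFIX}
    if last_key:
        # StartAfter makes S3 skip everything up to the previously seen
        # key, so we never re-list the whole bucket on incremental runs.
        kwargs["StartAfter"] = last_key

    new_last_key = last_key
    for page in s3.get_paginator("list_objects_v2").paginate(**kwargs):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if not key.endswith(".parquet"):
                continue
            # Emit a URL instead of data -- the proposed protocol.
            print(f"s3://{BUCKET}/{key}")
            # Listing is in ascending key order, so the last key wins.
            new_last_key = key

    # Persist updated source state for the next invocation (again, the
    # file-based handoff is an assumed mechanism, not a documented one).
    state_path = os.environ.get("ODF_NEW_ETAG_PATH")
    if state_path and new_last_key != last_key:
        with open(state_path, "w") as f:
            f.write(new_last_key)


if __name__ == "__main__":
    main()
```

Because the listing resumes from the stored key, incremental runs stay cheap even when the bucket holds millions of objects, which is exactly the scaling problem the glob approach in #943 runs into.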
Note: This further complicates and adds more variety to the ways of ingesting data, which is unfortunate. It would be easy for us to say that users should just "push" data into the dataset when ingest gets complex, but this would not only require some automation on the user's side to get things running (as opposed to Kamu driving the process), but would also expose them to the complexity of state management and error handling. Thus this extra type of ingest still seems justified.