Specify date when download URL expires #11
Comments
When I opened #8 I was thinking more in terms of having a "global" expiry window. For instance, for a given instance of weshare, a file would be automatically deleted after X days (where X is configurable at deployment time). The value of X would then be used to provision an S3 lifecycle policy that automatically deletes objects X days after upload. This approach would keep things simple and avoid introducing a DynamoDB table and a clean-up routine. At the same time, this idea is less flexible for users who might want different expiry times depending on the type of file... Probably worth reviewing how flexible S3 lifecycle policies can be and whether they can be used in combination with file metadata.
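For reference, such a global expiry window maps onto a single S3 lifecycle rule with no filter. A minimal sketch (Python; the function builds a plain dict of the shape boto3's `put_bucket_lifecycle_configuration` expects, and the rule ID is an arbitrary placeholder):

```python
def global_expiry_lifecycle(days: int) -> dict:
    """Build an S3 lifecycle configuration that deletes every object
    `days` days after upload. The rule ID is an arbitrary label."""
    return {
        "Rules": [
            {
                "ID": f"weshare-global-expiry-{days}d",
                "Status": "Enabled",
                "Filter": {},  # empty filter: the rule covers the whole bucket
                "Expiration": {"Days": days},
            }
        ]
    }

# With boto3 this would be applied roughly as:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket=bucket,
#       LifecycleConfiguration=global_expiry_lifecycle(30))
```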
Ah! I see. Thanks for clarifying. Some first observations after looking at the AWS S3 console: it looks like you can indeed apply a filter per lifecycle policy based on object tags. However, those key-value pairs need to be statically specified when creating a new lifecycle policy, it seems. So while this does not look super flexible, I could at least see an option to have a preset list of expiration periods to choose from, for example: 1 hour, 1 day, 1 week, 1 month. Each of those predefined time periods would be associated with one lifecycle policy, and each uploaded object would need an object tag attached to it that corresponds to the key-value pair set when creating the matching lifecycle policy. Not sure if I'm missing something here, though. Haven't used lifecycle policy filters by object tag yet.
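That idea could be sketched as one lifecycle rule per preset, each filtered on a tag such as `expiry=<preset>`, with every upload tagged accordingly. A hedged sketch in Python (preset names and the `expiry` tag key are hypothetical; note that S3 lifecycle expiration is expressed in whole days, so a 1-hour preset would need a different mechanism):

```python
# Preset expiry windows, in days (S3 lifecycle granularity is whole days).
PRESETS = {"1d": 1, "1w": 7, "1m": 30}

def preset_lifecycle_rules(presets: dict) -> list:
    """One lifecycle rule per preset, filtered on the tag expiry=<preset>."""
    return [
        {
            "ID": f"weshare-expiry-{name}",
            "Status": "Enabled",
            "Filter": {"Tag": {"Key": "expiry", "Value": name}},
            "Expiration": {"Days": days},
        }
        for name, days in presets.items()
    ]

# Each upload would then carry the matching tag, e.g. with boto3:
#   s3.put_object(Bucket=bucket, Key=key, Body=data, Tagging="expiry=1w")
```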
I actually really like this idea. I don't think there's a lot of added value in being ultra-specific and allowing the user to configure any arbitrary expiry time per object. I feel the values you suggested (1 hour/day/week/month/year or unlimited) are probably good enough for 99% of the use cases. Thanks for deep diving into this 🙂
I think that makes a lot of sense. Another benefit: no need to parse user-provided dates. Instead, simple CLI flags could be used to indicate the desired expiration time window. I like it. Perhaps it might be wise to limit the maximum expiration window (or provide an option to do so). Could be useful to avoid a potentially "ballooning" S3 bucket that only gets larger over time. I'm thinking of something like 1 hour/day/week/month or max, where max could be defined as a config option by the user deploying the service. Does that make sense?
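The cap could be enforced when resolving the CLI flag: clamp any requested preset to the deployment-configured maximum before tagging the upload. A sketch (Python; the preset names and the `max_preset` option are hypothetical):

```python
# Presets ordered from shortest to longest; "max" resolves to the cap.
ORDER = ["1h", "1d", "1w", "1m"]

def resolve_expiry(requested: str, max_preset: str = "1m") -> str:
    """Clamp the requested expiry preset to the configured maximum."""
    if requested == "max":
        return max_preset
    if requested not in ORDER:
        raise ValueError(f"unknown expiry preset: {requested}")
    # Never allow a window longer than the deployment-wide cap.
    if ORDER.index(requested) > ORDER.index(max_preset):
        return max_preset
    return requested
```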
Yeah, very good point. The default might be a month and you can move it down if needed. I like this idea! |
It might be useful to have an option to specify an expiration date for a download URL, i.e. users would no longer be able to download the file associated with a URL once that date has passed.
Potential benefits
Thoughts on how to implement that