29 Sep 2024 - by Mohamed Amr
On Archiving
VVire uses Turso as its database, which has limits on both storage and row reads/writes. Since article content is stored as JSON, this feature assumes articles will become storage-heavy over time, so I'm planning an archiving flow with these steps:
- Each time an article is fetched, a background job updates the read_at column in the articles table.
- A cron job runs later and checks that column's value.
- If the last read is older than a chosen threshold, the job creates a file on some file host containing the article's JSON content, stores the file identifier on the article entity in a column named pointer (for example), and then sets the content column to null.
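The cron job above can be sketched roughly like this. This is a minimal sketch against plain sqlite3 (Turso is SQLite-compatible); the schema columns, the 30-day threshold, and `upload_to_file_host` are all assumptions for illustration, not the real implementation:

```python
import sqlite3
from datetime import datetime, timedelta

ARCHIVE_AFTER_DAYS = 30  # assumed threshold; tune to real traffic


def upload_to_file_host(article_id, content):
    """Hypothetical stand-in for the real file-hosting upload.

    Returns the file identifier to store in the `pointer` column.
    """
    return f"articles/{article_id}.json"


def archive_stale_articles(conn):
    """The cron job: move the JSON content of stale articles to file storage."""
    cutoff = (datetime.utcnow() - timedelta(days=ARCHIVE_AFTER_DAYS)).isoformat()
    rows = conn.execute(
        "SELECT id, content FROM articles "
        "WHERE read_at < ? AND content IS NOT NULL",
        (cutoff,),
    ).fetchall()
    for article_id, content in rows:
        # Upload first, then null out the row, so a failed upload
        # never leaves an article without content.
        pointer = upload_to_file_host(article_id, content)
        conn.execute(
            "UPDATE articles SET pointer = ?, content = NULL WHERE id = ?",
            (pointer, article_id),
        )
    conn.commit()
    return len(rows)
```

Note the ordering: the file is written before the row is nulled, so a crash between the two steps only costs a duplicate upload, never the content itself.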
What if we want it back for multiple requests?
I guess we can use some Redis tricks to track recent reads:
- when an archived article is fetched by a user, the back-end stores a cache key article:{id} with the value [last_read, read_count]
- when the read_count value gets big enough, the article is restored into the database, making subsequent reads faster.
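The read-tracking part could look something like this. A plain dict stands in for Redis here (real code would use INCR plus an EXPIRE so the key decays), and the restore threshold of 5 reads is an assumed value:

```python
import time

RESTORE_THRESHOLD = 5  # assumed: restore after 5 reads of an archived article

cache = {}  # stand-in for Redis; real code would use INCR / EXPIRE


def record_archived_read(article_id):
    """Track a read of an archived article under the key article:{id}.

    Returns True when the article has been read often enough that it
    should be restored into the database.
    """
    key = f"article:{article_id}"
    _last_read, read_count = cache.get(key, (None, 0))
    read_count += 1
    cache[key] = (time.time(), read_count)
    return read_count >= RESTORE_THRESHOLD
```

When this returns True, the fetch handler would re-download the JSON via the pointer column, write it back into content, and clear the pointer.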
Am I over-engineering this? I hope not.