A random “design pattern” which just popped into my head:
Imagine you have some large data structure which is expensive to update. Maybe it’s a list of integers and you want to clamp each element to be between 0 and 100. Normally, you’d go through each element, perform the check and update it if required.
Instead, I suggest keeping the original list and creating a separate log of the ‘modifications’ which have been made to it. So now you have the original list and a note saying “by the way, when you read an element you also have to clamp it to the range 0-100”. Now you’ve made the update cheap, but accessing the data is more costly.
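Here’s a minimal sketch of what I mean in Python. The class and names are just illustrative, not anything standard:

```python
class LazyList:
    """List-like wrapper that defers element-wise updates to read time."""

    def __init__(self, data):
        self._data = list(data)
        self._pending = []  # log of not-yet-applied element-wise functions

    def update(self, fn):
        # O(1): record the modification instead of touching every element.
        self._pending.append(fn)

    def __getitem__(self, index):
        # Reads replay the log, so each access costs O(len(self._pending)).
        value = self._data[index]
        for fn in self._pending:
            value = fn(value)
        return value
```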
However, if your program is otherwise idle later on, it can come back and coalesce everything back down: apply the logged modifications to the underlying data in one pass, then throw the log away.
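Continuing the sketch, the idle-time step might be a method like this on the same hypothetical class:

```python
    def coalesce(self):
        # Run during idle time: fold every logged modification into the
        # underlying data, then clear the log so reads are cheap again.
        for fn in self._pending:
            self._data = [fn(v) for v in self._data]
        self._pending.clear()
```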
So you’re basically saving time in your “change it” code, paying it back a little at a time in the “access it” code, and you’ve got the option to pay off the debt entirely when you’ve got a bit of idle time spare.
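Putting it together with the clamping example (again, all names here are mine):

```python
nums = LazyList(range(200))
nums.update(lambda v: max(0, min(v, 100)))  # cheap: one log entry, no scan
print(nums[150])  # 100 — the clamp is applied on the way out
nums.coalesce()   # idle time: pay off the whole debt at once
print(nums[150])  # 100 — now an ordinary O(1) read
```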
Whether this is useful or not depends on the relative frequency and cost of updates versus accesses. I’m noting it here as another potential “tool in the box” for algorithmic optimization. It probably lives next to memoization and similar techniques.