This is for the computer guys.
I'm overseeing a project to add some major functionality to our SAP system for production planning. Before we turn it on, we have to provide a big chunk of static historical data that it can use as the basis for future calculations. (It's basically fake history so the system can function like it has been implemented forever as soon as it's turned on.)
The software folks have defined 9 data points with an average of 15 months of data each, all for about 800 materials. The kicker is that four of those nine are derived from the other five. For example, one row is monthly returns, and the row below it is an accumulated running total of those returns.
When I was taught programming and database design, we would never have let those calculated fields take up space in the database; we would have computed them whenever we needed them. I mentioned this to the developer, and she said size doesn't matter and that it would be slower that way, so she's adamant about storing everything in its own table space. These data points will be used once or twice a month at most, and I figured a little slowdown was no big deal. But I guess current best practice is to save everything.
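To show what I mean, here's a quick sketch (made-up numbers, not our actual SAP tables) of how trivial the derived rows are to compute on demand instead of storing:

```python
from itertools import accumulate

# Hypothetical monthly returns for one material (15 months of fake history).
monthly_returns = [12, 7, 0, 25, 3, 18, 9, 4, 11, 6, 2, 30, 8, 5, 14]

# The "derived" data point: a running total of those returns,
# computed on the fly rather than stored as its own row.
running_total = list(accumulate(monthly_returns))

print(running_total)
```

Multiply that by 800 materials and it's still a rounding error in compute time for something queried once or twice a month.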
I'm seeing more and more databases that aren't relational with proper keys; instead, every table holds all its data on its own, storing the same values over and over and over.
These people would never have survived having to write a program that ran with a 64k ceiling.