Have you ever wondered why Big Data systems that support MapReduce, typically batch systems such as Hadoop/HBase, the COSMOS store, and Google BigTable, treat their data stores as immutable?
Fault-tolerance and resilience:
If the data is prone to human error, it is better to write it as is. Once a record is written, you never have to worry about it being overwritten or lost; each new write simply becomes another version, and the version history grows forever. Keeping such a long trail may sound like a data explosion, but in practice it is rarely as bad as you think, and building a coherent view on top of this data is simple: replay the records in order and let the latest write for each key win.
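The idea above can be sketched in a few lines. This is a hypothetical toy, not any real system's API: an append-only log where writes are never mutated, and a "current view" derived by replaying the log with last-write-wins semantics.

```python
# A minimal sketch of an append-only, immutable store.
# Names (write, current_view) are illustrative, not from any real system.
log = []

def write(key, value):
    """Append a new record; earlier records are never overwritten."""
    log.append({"key": key, "value": value, "version": len(log)})

def current_view():
    """Derive a coherent view by replaying the log: last write wins."""
    view = {}
    for record in log:
        view[record["key"]] = record["value"]
    return view

write("user:1", "alice")
write("user:1", "alicia")   # a correction: the old record stays in the log
write("user:2", "bob")

print(current_view())   # {'user:1': 'alicia', 'user:2': 'bob'}
print(len(log))         # 3 records kept: the full history is preserved
```

Because every write is retained, a human error (the bad `alicia` write, say) can always be undone by replaying the log up to an earlier version, which is exactly the fault-tolerance argument for immutability.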