MIT researchers have developed a new data-compression technique that frees up more memory in computers and mobile devices, allowing them to run faster and perform more tasks simultaneously.
The technique releases memory occupied by redundant data, freeing up storage capacity and boosting computing speed, among other benefits.
In current computer systems, accessing main memory is far more expensive than actual computation. Compressing data in memory therefore improves performance, as it reduces both how often and how much data programs need to fetch from main memory.
Modern computers manage memory and transfer data in fixed-size chunks, and traditional compression techniques operate on those chunks.
But software does not store its data in fixed-size chunks by default. Instead, it uses "objects": data structures that hold various types of data and vary in size. As a result, traditional hardware compression techniques fail to handle objects properly.
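The mismatch is easy to see in practice. The sketch below is an illustrative assumption, not code from the MIT work: it compares the sizes of a few ordinary Python objects against a fixed 64-byte chunk, a common cache-line size.

```python
import sys

# Hardware caches transfer data in fixed-size blocks; 64 bytes is a
# common cache-line size (an illustrative assumption, not from the article).
CHUNK_SIZE = 64

# Software objects, by contrast, come in many different sizes.
objects = [42, 3.14, "hello", [1, 2, 3], {"key": "value"}]
sizes = [sys.getsizeof(obj) for obj in objects]

for obj, size in zip(objects, sizes):
    print(f"{type(obj).__name__:>5s} object: {size} bytes")

# The object sizes vary and generally do not line up with the chunk size,
# which is why chunk-oriented compression handles objects poorly.
print("all objects fit the fixed chunk exactly:",
      all(size == CHUNK_SIZE for size in sizes))
```

Exact sizes vary across Python versions and platforms, but the point holds: object sizes are irregular, while the hardware's unit of transfer is not.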
This issue can be solved in two ways. The first is to add logic elements to the memory itself so that the most common data-processing tasks can take place there.
The second is to reduce the amount of data that is accessed repeatedly. It was this second approach that inspired the new compression technique at MIT.
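Redundant data is exactly what compression removes well. As a rough illustration (using the standard zlib library and made-up example data, not the MIT technique or its benchmarks), highly repetitive data shrinks dramatically when compressed:

```python
import zlib

# Hypothetical example data: the same small record repeated many times,
# standing in for redundant data that a program accesses repeatedly.
redundant = b"sensor=42;status=OK;" * 1000

compressed = zlib.compress(redundant)
ratio = len(redundant) / len(compressed)

print(f"original:   {len(redundant)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"ratio:      {ratio:.0f}x smaller")
```

The more redundancy the data contains, the less of it needs to travel between main memory and the processor, which is the intuition behind compressing memory in the first place.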
This concept can help programmers who use modern programming languages such as Java, Python, and Go, which store and manage data in objects, without requiring any changes to their code.
So, applications would consume less memory and run faster. For end users, it would result in computers that can run much faster or run many more apps at the same speed.
When the technique was tested in a Java virtual machine, it compressed twice as much data and cut memory usage in half, proving more efficient than traditional cache-based compression.
You can read about how the compression technique works in detail here.