Short Bytes: Even though parallel programming is known for its speed and efficiency, it's no secret that it adds complexity to code. To tackle this problem, a team of MIT researchers has created a new chip design named Swarm.

Before telling you about an important advancement that could make parallel programming a less tedious job, let me cover the basics for beginners. Parallel programming is a method of performing multiple computations simultaneously. It operates on the principle that a large problem can be divided into smaller fragments, which are then solved together using two or more processors.

In theory, you might expect a multicore machine (say, an n-core machine) to be n times faster than a single-core one. In practice, however, a large share of computer programs are sequential, and restructuring them to exploit parallelism is a tedious process.
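To make the divide-into-fragments idea concrete, here is a minimal sketch in Python using the standard multiprocessing module: a large sum is split into chunks that worker processes solve simultaneously before the partial results are combined. The function names and chunking scheme are our own illustration, not anything from the MIT work.

```python
# Minimal sketch: summing a large list by splitting it across worker
# processes with Python's multiprocessing module. Note the speed-up is
# rarely the ideal n-times figure: splitting the work, dispatching it,
# and merging the results all add overhead.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker solves one fragment of the problem independently.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Divide the large problem into smaller fragments...
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...solve them simultaneously, then combine the partial results.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))
```

Even in this toy case, notice how much bookkeeping the programmer shoulders: choosing chunk sizes, dispatching work, and recombining results. That bookkeeping is exactly what makes parallelising real sequential programs tiresome.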

However, the latest developments at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) could change this scenario.

The researchers have created a new chip design named Swarm, which makes parallel programs easier to write and more efficient to run.

In their tests, the researchers compared existing parallel programming models with their Swarm versions. They found that the Swarm versions were about 18 times faster. Surprisingly, the newly engineered versions needed just 10% of the code.

The Swarm design was also able to speed up one program, which computer scientists had previously failed to parallelise, by a factor of 75.

Explaining why multicore systems are harder to program, Daniel Sanchez, an MIT assistant professor, says:

You have to explicitly divide the work that you’re doing into tasks, then you need to enforce some synchronisation between tasks accessing shared data. What this architecture does, essentially, is to remove all sorts of explicit synchronisation to make parallel programming easier.

Compared to conventional multicore chips, Swarm features extra circuitry to handle prioritisation. Work is performed in priority order: the task with the highest priority gets executed first. A high-priority task can also spawn lower-priority tasks of its own, which Swarm automatically slots into the queue.

Writing a program with Swarm functionality is just as easy. After defining a function, a programmer only needs to add a line of code that loads that function into Swarm's queue.

With inputs from ScienceDaily

Did you find this article helpful? Don’t forget to drop your feedback in the comments section below.