
What is it about?

This research focuses on optimizing a deduplication system by tuning the key factors in content-defined chunking (CDC): the declaration of chunk cut-points and efficient fingerprint lookup via bucket-based index partitioning. For efficient chunking, we propose TTTD-P, a variant of the Two Thresholds Two Divisors (TTTD) CDC algorithm optimized with a Differential Evolution (DE) based approach. TTTD-P significantly reduces the number of computing operations by replacing TTTD's multiple divisors with a single, dynamically optimized divisor D and an optimal threshold value, exploiting the multi-operation nature of TTTD.
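To make the chunking step concrete, here is a minimal Python sketch of the TTTD-P idea: a BUZ (cyclic polynomial) rolling hash drives cut-point decisions against a single divisor D, with a minimum threshold suppressing tiny chunks and a maximum threshold forcing a cut. The parameter values (d=540, t_min=460, t_max=2800, a 48-byte window) are illustrative assumptions, not the paper's DE-optimized values, and the Differential Evolution search itself is not reproduced here.

import random
from collections import deque

random.seed(42)  # fixed substitution table so hashes are reproducible across runs
TABLE = [random.getrandbits(32) for _ in range(256)]

def _rotl32(x, r):
    # rotate a 32-bit value left by r bits
    r %= 32
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def tttd_p_chunks(data, d=540, t_min=460, t_max=2800, w=48):
    # Yield (start, end) chunk boundaries over a bytes object using one divisor D.
    chunks, start, h = [], 0, 0
    win = deque()
    for i, b in enumerate(data):
        # BUZ rolling-hash update: rotate, expire the oldest byte, fold in the new one
        h = _rotl32(h, 1)
        if len(win) == w:
            h ^= _rotl32(TABLE[win.popleft()], w)
        h ^= TABLE[b]
        win.append(b)
        size = i - start + 1
        if size < t_min:                       # minimum threshold: no tiny chunks
            continue
        if h % d == d - 1 or size >= t_max:    # single-divisor cut-point, or forced cut
            chunks.append((start, i + 1))
            start, h = i + 1, 0
            win.clear()
    if start < len(data):                      # trailing bytes form the final chunk
        chunks.append((start, len(data)))
    return chunks

Because every position tests only one divisor, the per-byte work is a rotate, two XORs, and one modulo, which is the operation saving TTTD-P targets relative to TTTD's two-divisor test.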

Why is it important?

Comparative experimental results reveal that TTTD-P, using the fast BUZ rolling hash function together with bucket indexing on the Hadoop Distributed File System (HDFS), achieves the highest redundancy detection among the compared schemes, with higher throughput, a higher deduplication ratio, lower computation time, and very low hash-comparison time, making it well suited for distributed deduplication in big-data storage systems.
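The bucket-based fingerprint lookup can be sketched in the same vein, reusing the chunker above: the first byte of each chunk's fingerprint selects a bucket, so a lookup probes only one small partition of the index instead of the whole fingerprint table, which is what keeps hash-comparison time low. This in-process model uses SHA-1 fingerprints and 256 buckets as assumptions; the actual system distributes the index over HDFS, which is not reproduced here.

import hashlib

class BucketIndex:
    # Fingerprint index partitioned into buckets keyed by the fingerprint's first byte.
    def __init__(self, n_buckets=256):
        self.buckets = [dict() for _ in range(n_buckets)]
        self.n = n_buckets

    def lookup_or_insert(self, chunk):
        fp = hashlib.sha1(chunk).digest()
        bucket = self.buckets[fp[0] % self.n]  # probe a single partition
        if fp in bucket:
            return fp, True                    # duplicate: store only a reference
        bucket[fp] = len(chunk)                # toy payload; a real store keeps a chunk location
        return fp, False

def dedup_ratio(data):
    # Combine chunking and indexing: logical bytes vs. bytes physically stored.
    index, stored = BucketIndex(), 0
    for s, e in tttd_p_chunks(data):
        _, is_dup = index.lookup_or_insert(data[s:e])
        if not is_dup:
            stored += e - s
    return len(data) / max(stored, 1)

For example, dedup_ratio(b"abcd" * 100000) returns a ratio well above 1, since the repeated content yields duplicate fingerprints that are stored once and referenced thereafter.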

The following have contributed to this page:
Naresh Kumar