Not too big: Machine learning tames huge data sets

A machine-learning algorithm has demonstrated that it can process data sets too large for a computer's available memory by identifying a massive data set's key features and dividing the data into manageable batches that don't choke the hardware. During a test run on the world's fifth-fastest supercomputer, the algorithm set a world record for factorizing huge data sets. Equally efficient on laptops and supercomputers, the highly scalable algorithm removes the hardware bottlenecks that have prevented data-rich applications in cancer research, satellite imagery, social media networks, national security science and earthquake research, to name just a few, from processing their information.

Source: sciencedaily.com
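
To make the batching idea concrete, here is a minimal sketch, not the code described in the article, assuming the factorization in question is a non-negative matrix factorization: the data matrix is read one row batch at a time, so only a single batch plus two small rank-sized accumulators ever need to fit in memory. The function name batched_nmf and all of its parameters are illustrative choices, not anything from the source.

```python
# Minimal sketch of out-of-core non-negative matrix factorization (NMF):
# X (n x m) is factorized as X ~= W @ H with block-wise multiplicative
# updates, touching only `batch_rows` rows of X at a time.  X can be a
# disk-backed np.memmap, so the full data set never has to fit in RAM.
import numpy as np

def batched_nmf(X, rank, n_iter=50, batch_rows=1024, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank))      # could itself be a memmap for huge n
    H = rng.random((rank, m))

    for _ in range(n_iter):
        HHt = H @ H.T                       # rank x rank, always small
        num_H = np.zeros((rank, m))         # accumulators for the H update
        gram_W = np.zeros((rank, rank))
        for start in range(0, n, batch_rows):
            Xb = np.asarray(X[start:start + batch_rows])  # one batch in RAM
            Wb = W[start:start + batch_rows]              # view into W
            # multiplicative update for this block of W (in place via the view)
            Wb *= (Xb @ H.T) / (Wb @ HHt + eps)
            # accumulate sufficient statistics for the global H update
            num_H += Wb.T @ Xb
            gram_W += Wb.T @ Wb
        H *= num_H / (gram_W @ H + eps)
    return W, H

if __name__ == "__main__":
    # toy demo: a small in-memory array stands in for a disk-backed memmap
    X = np.random.default_rng(1).random((5000, 200))
    W, H = batched_nmf(X, rank=10)
    print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```

Because each pass only streams batches and reduces two small accumulators, the same scheme runs unchanged on a laptop, and the per-batch work and accumulator reductions are the natural places to distribute it across the nodes of a supercomputer.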
