Deep learning networks prefer the human voice — just like us

A study suggests that AI systems may reach higher performance when they are trained with sound files of spoken human language as labels rather than with numerical data labels. In a side-by-side comparison, the researchers found that a neural network whose training labels consisted of sound files identified objects in images more accurately than a network trained in the conventional way, using simple binary (one-hot) label inputs.
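
To make the contrast concrete, the sketch below shows, under stated assumptions, what the two labelling schemes could look like in code: a conventional head trained against one-hot class labels with cross-entropy, versus a "voice-labelled" head that regresses a spectrogram of the spoken word. The architecture, tensor sizes, and loss choices are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch (assumed architecture and sizes, not the study's setup):
# the same image encoder is trained against either a one-hot class label
# or a spectrogram standing in for a recording of the spoken label.
import torch
import torch.nn as nn

NUM_CLASSES = 10                   # e.g. ten object categories (assumed)
SPEC_BINS, SPEC_FRAMES = 64, 32    # size of the hypothetical label spectrogram


class TinyImageEncoder(nn.Module):
    """Shared image backbone producing a feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class OneHotHead(nn.Module):
    """Variant A: conventional categorical labels -> class logits."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.fc = nn.Linear(feat_dim, NUM_CLASSES)

    def forward(self, feats):
        return self.fc(feats)


class VoiceLabelHead(nn.Module):
    """Variant B: the network predicts a spectrogram of the spoken label."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.fc = nn.Linear(feat_dim, SPEC_BINS * SPEC_FRAMES)

    def forward(self, feats):
        return self.fc(feats).view(-1, SPEC_BINS, SPEC_FRAMES)


if __name__ == "__main__":
    images = torch.randn(8, 3, 64, 64)               # dummy image batch
    class_ids = torch.randint(0, NUM_CLASSES, (8,))  # dummy integer labels
    # Dummy spectrograms standing in for recordings of the spoken labels.
    spoken_specs = torch.randn(8, SPEC_BINS, SPEC_FRAMES)

    feats = TinyImageEncoder()(images)

    # A: binary / one-hot training signal via cross-entropy.
    logits = OneHotHead()(feats)
    loss_onehot = nn.functional.cross_entropy(logits, class_ids)

    # B: voice-labelled training signal, regressing the label's spectrogram.
    pred_spec = VoiceLabelHead()(feats)
    loss_voice = nn.functional.mse_loss(pred_spec, spoken_specs)

    print(f"one-hot loss: {loss_onehot.item():.3f}, "
          f"voice-label loss: {loss_voice.item():.3f}")
```

The key design difference is only in the training target: the voice-labelled variant asks the network to produce a richer, structured output (the sound of the word) rather than a single categorical bit pattern.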

Source: sciencedaily.com
