Pocket brains: Neuromorphic hardware arrives for our brain-inspired algorithms

(credit: Miguel Navarro / Getty Images)

As the world’s great companies pursue autonomous cars, they’re essentially spending billions of dollars to get machines to do what your average two-year-old can do without thinking: identify what they see. Of course, in some regards toddlers still have the advantage. Infamously, a driver died last year in a Tesla sedan; he wasn’t paying attention when the vehicle’s camera mistook a nearby truck for the sky.

What success these companies have had so far owes largely to a long-dormant form of computation that models certain aspects of the brain. However, this form of computation pushes current hardware to its limits, since modern computers operate very differently from the gray matter in our heads. So, as programmers create “neural network” software to run on regular computer chips, engineers are also designing “neuromorphic” hardware that can imitate the brain more efficiently. Sadly, one type of neural net that has become the standard in image recognition and other tasks, the convolutional neural net, or CNN, has resisted replication in neuromorphic hardware.

That is, until recently.
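
For readers unfamiliar with the term, a convolutional neural net is essentially a stack of small learned image filters followed by a classifier. The sketch below, written in PyTorch, is purely illustrative; the layer sizes, class count, and the TinyCNN name are made up for this example and are not taken from the article or from any neuromorphic design it describes.

# A minimal convolutional neural net (CNN) sketch, assuming PyTorch is installed.
# Everything here (layer sizes, number of classes) is illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers scan the image with small learned filters,
        # loosely analogous to receptive fields in the visual cortex.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        # A fully connected layer maps the pooled features to class scores.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: classify a batch of four 32x32 RGB images (random data here).
model = TinyCNN()
scores = model(torch.randn(4, 3, 32, 32))
print(scores.shape)  # torch.Size([4, 10])

On a conventional chip, every one of those filter multiplications is executed explicitly, which is exactly the workload that neuromorphic hardware tries to handle more efficiently.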

Ars Technica
