The algorithm will be implemented on brain-inspired computing systems, like the spike-based SpiNNaker (pictured here). SpiNNaker is part of the Human Brain Project’s EBRAINS research infrastructure.
Forschungszentrum Jülich

By Christoph Pelzl

Researchers at TU Graz demonstrate a new design method for particularly energy-saving artificial neural networks that get by with extremely few signals and, similar to Morse code, also assign meaning to the pauses between the signals.

Most new achievements in artificial intelligence (AI) require very large neural networks. They consist of hundreds of millions of neurons arranged in several hundred layers, i.e. they have very "deep" network structures. These large, deep neural networks consume a lot of energy in the computer. Those neural networks that are used in image classification (e.g.
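The Morse-code analogy can be illustrated with a minimal sketch of temporal coding, where the pause between two spikes, rather than the number of spikes, carries the value. This is not the TU Graz algorithm itself; the `encode`/`decode` functions and the `unit` scaling factor are illustrative assumptions.

```python
def encode(values, t0=0.0, unit=1.0):
    """Emit one spike time per value; each inter-spike interval
    (the 'pause') equals the value scaled by `unit`."""
    times, t = [t0], t0
    for v in values:
        t += v * unit
        times.append(t)
    return times

def decode(times, unit=1.0):
    """Recover the values from the pauses between consecutive spikes."""
    return [(b - a) / unit for a, b in zip(times, times[1:])]

spikes = encode([3.0, 1.0, 4.0])
print(spikes)          # spike times: [0.0, 3.0, 4.0, 8.0]
print(decode(spikes))  # recovered values: [3.0, 1.0, 4.0]
```

With this scheme, three values are transmitted using only four spikes, which is the energy-saving intuition behind letting pauses carry meaning.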