Neural networks were first proposed in 1943 by McCulloch and Pitts as a model of how neurons in the brain compute. Research declined in the 1960s and 1970s, due both to the limited computing power of the era and to theoretical results, notably Minsky and Papert's 1969 analysis showing that single-layer perceptrons cannot represent simple functions such as XOR. Interest revived in the 1980s with the popularization of backpropagation, which made it practical to train multi-layer networks. Since 2006, breakthroughs in deep learning driven by unsupervised pre-training, GPU acceleration, and large labeled datasets such as ImageNet have produced state-of-the-art performance in areas like image recognition and natural language processing.