The article here: http://novanoid.github.io/2014/09/26/training-a-neural-network-to-recognize-handwritten-digits/ discusses and implements a way to recognize handwritten digits. For input images of 256 pixels and an output vector of size 10, the network can be built and trained quickly and efficiently on a modern machine.
However, for Chinese characters the output vector would need more than 50,000 entries (one per character), even though only about 3,000 to 8,000 characters are typically needed for everyday writing.
Training a single neural network for all of them using the approach from the article seems infeasible on today's machines. I attempted to train a network that recognizes only 3,000 characters and quickly ran into memory limitations.
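For a rough sense of the scale involved, here is a back-of-the-envelope parameter count for a fully connected network with a single hidden layer. The hidden size of 1,000 units is my own assumption, not a figure from the article, and this only counts the stored weights; training also needs memory for activations and gradients on top of this.

```python
# Rough parameter counts for a one-hidden-layer fully connected network.
# The hidden size (1000) is an assumption for illustration only.
def param_count(n_inputs, n_hidden, n_outputs):
    # weights + biases for input->hidden, then hidden->output
    return (n_inputs * n_hidden + n_hidden) + (n_hidden * n_outputs + n_outputs)

for n_out in (10, 3000, 50000):
    params = param_count(256, 1000, n_out)
    # assuming 4 bytes per float32 weight
    print(f"{n_out:>6} classes: {params:,} parameters (~{params * 4 / 1e6:.1f} MB)")
```

The output layer quickly dominates everything else as the number of classes grows.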
Is it possible to create multiple smaller networks, each recognizing a subset of the characters, and then have a mechanism for running all of them simultaneously to recognize handwritten characters in real time? What would this approach look like? Or is there a better/different approach to recognizing handwritten Chinese characters?
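To make the idea concrete, here is a minimal sketch of what I have in mind, assuming each sub-network is trained on its own subset of characters and exposes a `predict(image)` method returning per-character confidence scores; the names and the interface are hypothetical, and the final answer is simply the highest-scoring character across all sub-networks.

```python
from concurrent.futures import ThreadPoolExecutor

def recognize(image, subnetworks):
    """Run every sub-network on the same image and keep the single best guess.

    Each element of `subnetworks` is assumed to be an object with a
    `predict(image)` method returning a dict that maps its own subset of
    characters to confidence scores; this interface is hypothetical.
    """
    best_char, best_score = None, float("-inf")
    with ThreadPoolExecutor() as pool:
        # Score the image against every sub-network in parallel.
        for scores in pool.map(lambda net: net.predict(image), subnetworks):
            char, score = max(scores.items(), key=lambda kv: kv[1])
            if score > best_score:
                best_char, best_score = char, score
    return best_char
```

Is something along these lines workable, or does it break down in practice?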