I've posted several times about the mathematical vector engine theano and its benefits.

If you're going to dive deep into neural networks, I recommend learning it and using pure theano. However, there are numerous neural network libraries based on theano; let's list some of them:
  • Theanets, http://theanets.readthedocs.org/ (note: it is different from theanet, which I haven't found useful)
    theanets is a good option to start with: it is quite efficient and simple, and it also provides support for recurrent neural networks. Note, however, that its RProp implementation uses mini-batches, which makes it unstable (see the minimal sketch after this list).
  • Keras, http://keras.io/
    so far it seems to be a very solid theano library; it contains several minibatch-based optimizers and several loss functions, mostly for regression. The authors compare it to Torch (a short example follows this list).
  • Pylearn2, http://deeplearning.net/software/pylearn2/
    this library was written by the LISA lab, the authors of theano. Though very advanced, the library itself is terribly complex: usually it is easier (at least for me) to write things from scratch than to write a YAML configuration (a sample of such a configuration appears after this list).
  • others which I consider less mature and partially forgotten:
    lasagne, blocks, crino, deepANN (the last one is deprecated).
  • finally, I want to mention my own nano-library (500 lines of code!), which provides 5 trainers + 6 losses for feedforward networks. It supports sample weights and is extremely flexible, because its main paradigm is: write an expression. That is, use theano and just write the activation function; black-box optimization methods will do everything for you. This allows writing amazingly complex activations and fitting arbitrary functions, since you're no longer restricted to the 'layers' model:

    https://github.com/iamfullofspam/hep_ml/blob/master/hep_ml/nnet.py

    One more notable thing: it uses the scikit-learn interface, so you can use it as part of, say, a pipeline, or run AdaBoost over neural networks (which is very fast, by the way); a sketch of this is below as well.
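
To give a feel for theanets, here is the minimal classification sketch promised above. I am writing it from memory against a roughly 0.6-era API, so treat the exact names (Classifier, the train signature, the algo='rprop' spelling) as assumptions to verify against the docs:

    import numpy as np
    import theanets

    X = np.random.randn(100, 10).astype('float32')
    y = np.random.randint(0, 2, size=100).astype('int32')

    # 10 inputs -> 20 hidden units -> 2 classes
    net = theanets.Classifier(layers=[10, 20, 2])
    # RProp here runs over mini-batches, hence the instability noted above
    net.train([X, y], algo='rprop')
    print(net.predict(X))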
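A short Keras regression example, roughly following the 1.x API; signatures such as nb_epoch have changed between releases, so treat the details as version-dependent:

    import numpy as np
    from keras.models import Sequential
    from keras.layers.core import Dense

    X = np.random.randn(1000, 20)
    y = np.random.randn(1000)

    model = Sequential()
    model.add(Dense(64, input_dim=20, activation='relu'))
    model.add(Dense(1))
    # pick one of the built-in minibatch optimizers and loss functions
    model.compile(optimizer='rmsprop', loss='mse')
    model.fit(X, y, batch_size=32, nb_epoch=10)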
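For contrast, here is roughly what driving pylearn2 looks like: you describe the whole experiment in YAML and feed it to the parser. The class paths and parameters below are paraphrased from memory of the official MLP tutorial, so take them as approximate:

    from pylearn2.config import yaml_parse

    yaml_string = """
    !obj:pylearn2.train.Train {
        dataset: !obj:pylearn2.datasets.mnist.MNIST { which_set: 'train' },
        model: !obj:pylearn2.models.mlp.MLP {
            nvis: 784,
            layers: [
                !obj:pylearn2.models.mlp.Sigmoid { layer_name: 'h0', dim: 500, irange: 0.05 },
                !obj:pylearn2.models.mlp.Softmax { layer_name: 'y', n_classes: 10, irange: 0.05 }
            ]
        },
        algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {
            learning_rate: 0.01,
            batch_size: 100,
            termination_criterion: !obj:pylearn2.termination_criteria.EpochCounter { max_epochs: 5 }
        }
    }
    """
    train = yaml_parse.load(yaml_string)
    train.main_loop()

Compare the amount of ceremony here with writing a single expression in theano.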
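Finally, the scikit-learn compatibility of my nano-library in action. The estimator name and its parameters below are my assumptions for illustration (check nnet.py for the actual classes); AdaBoostClassifier itself is plain scikit-learn:

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    # hypothetical import: see nnet.py for the real estimator names
    from hep_ml.nnet import SimpleNeuralNetwork

    X = np.random.randn(1000, 10)
    y = np.random.randint(0, 2, size=1000)

    # a single network behaves like an ordinary scikit-learn classifier,
    # including support for per-sample weights
    clf = SimpleNeuralNetwork()
    clf.fit(X, y, sample_weight=np.ones(len(y)))

    # ...so it plugs straight into sklearn meta-estimators such as AdaBoost
    ensemble = AdaBoostClassifier(base_estimator=SimpleNeuralNetwork(), n_estimators=10)
    ensemble.fit(X, y)
    print(ensemble.predict_proba(X)[:5])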