---------[ MNIST Convolutional Network Details ]---------

Hyperparameters:
    Loss: Cross Entropy
    Learning Rate: 0.0005
    Momentum: 90%
    Decay: 1e-07
    Max Mini-batch Size: 200

Node Layer Count: 6
    784 x 8112 x 5808 x 5184 x 500 x 10

Convolutional Layer:
    Dimensions: 28*28*1 [784] x 13*13*48 [8112]
    Filter Size: 4
    Filter Stride: 2
    Filter Count: 48
    Activation: ReLU

Convolutional Layer:
    Dimensions: 13*13*48 [8112] x 11*11*48 [5808]
    Filter Size: 3
    Filter Stride: 1
    Filter Count: 48
    Activation: ReLU

Convolutional Layer:
    Dimensions: 11*11*48 [5808] x 9*9*64 [5184]
    Filter Size: 3
    Filter Stride: 1
    Filter Count: 64
    Activation: ReLU

Dense Layer:
    Dimensions: 5184 x 500
    Activation: TanH

Dense Layer:
    Dimensions: 500 x 10
    Activation: Softmax

This network was run for 75 iterations over the MNIST digit dataset, where each iteration was 1 epoch.

---------[ MNIST Convolutional Network Details ]---------

------------[ MNIST Dense Network Details ]------------

Hyperparameters:
    Loss: Cross Entropy
    Learning Rate: 0.001
    Momentum: 0%
    Decay: 0
    Max Mini-batch Size: 200

Node Layer Count: 7
    784 x 600 x 500 x 400 x 300 x 200 x 10

Dense Layer:
    Dimensions: 784 x 600
    Activation: Sigmoid

Dense Layer:
    Dimensions: 600 x 500
    Activation: Sigmoid

Dense Layer:
    Dimensions: 500 x 400
    Activation: Sigmoid

Dense Layer:
    Dimensions: 400 x 300
    Activation: Sigmoid

Dense Layer:
    Dimensions: 300 x 200
    Activation: Sigmoid

Dense Layer:
    Dimensions: 200 x 10
    Activation: Softmax

This network was run for 150 iterations over the MNIST digit dataset, where each iteration was 1 epoch.
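The convolutional layer dimensions listed above follow the standard valid-convolution size formula, out = (in - filter) / stride + 1, with the node count of each layer being width * height * filter count. A minimal sketch checking those numbers (pure Python; no framework is assumed, since the source does not name one):

```python
def conv_out(size, filt, stride):
    """Valid convolution output width: (in - filter) / stride + 1."""
    return (size - filt) // stride + 1

# Layer 1: 28x28x1 input, 4x4 filter, stride 2, 48 filters
w1 = conv_out(28, 4, 2)   # 13
# Layer 2: 3x3 filter, stride 1, 48 filters
w2 = conv_out(w1, 3, 1)   # 11
# Layer 3: 3x3 filter, stride 1, 64 filters
w3 = conv_out(w2, 3, 1)   # 9

print(w1 * w1 * 48)  # 8112 nodes
print(w2 * w2 * 48)  # 5808 nodes
print(w3 * w3 * 64)  # 5184 nodes
```

These match the node layer counts 784 x 8112 x 5808 x 5184 given for the convolutional stack.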
------------[ MNIST Dense Network Details ]------------

----------[ Binary Counting Network Details ]----------

Hyperparameters:
    Loss: Mean Squared Error
    Learning Rate: 0.1
    Momentum: 0%
    Decay: 0
    Max Batch Size: 1

Node Layer Count: 4
    5 x 8 x 8 x 5

Dense Layer:
    Dimensions: 5 x 8
    Activation: Sigmoid

Dense Layer:
    Dimensions: 8 x 8
    Activation: Sigmoid

Dense Layer:
    Dimensions: 8 x 5
    Activation: Sigmoid

This network was run for 9000 iterations over the dataset of 5-bit binary numbers, where each iteration was 1 epoch.

----------[ Binary Counting Network Details ]----------

-------[ Hamlet Recurrent Neural Network Details ]-------

Hyperparameters:
    Loss: Cross Entropy
    Learning Rate: 0.1
    Momentum: 0%
    Decay: 0
    Max Batch Size: 1
    Truncation (TBPTT): 12

Node Layer Count: 4
    39 x [400] x 400 x 39

Recurrent Layer:
    Dimensions: (39 + [400]t-1) x [400]t
    Activation: Sigmoid

Dense Layer:
    Dimensions: [400]t x 400
    Activation: Sigmoid

Dense Layer:
    Dimensions: 400 x 39
    Activation: Softmax

This network was run for 25 iterations over the dataset containing the entirety of Shakespeare's Hamlet, where each iteration was 1 epoch.

-------[ Hamlet Recurrent Neural Network Details ]-------
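The recurrent layer's dimension notation, (39 + [400]t-1) x [400]t, reads as the 39-dimensional one-hot character input concatenated with the previous 400-dimensional hidden state before the weight multiply. A minimal sketch of one forward step under that reading (pure Python; the random weights and the helper names are illustrative, not taken from the trained network):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

IN, HID = 39, 400
random.seed(0)
# (39 + 400) x 400 weight matrix, matching the layer dimensions above
W = [[random.uniform(-0.1, 0.1) for _ in range(HID)] for _ in range(IN + HID)]

def step(x, h_prev):
    """One recurrent step: concatenate input with previous hidden state,
    multiply by W, and apply the sigmoid activation."""
    z = x + h_prev  # length 39 + 400
    return [sigmoid(sum(z[i] * W[i][j] for i in range(IN + HID)))
            for j in range(HID)]

h = [0.0] * HID        # initial hidden state [400]t-1
x = [0.0] * IN         # one-hot encoding of one of the 39 characters
x[5] = 1.0
h = step(x, h)         # new hidden state [400]t
print(len(h))  # 400
```

During training, backpropagation through this step would be truncated after 12 time steps, per the Truncation (TBPTT) setting above.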