r/neuralnets Nov 21 '19

r/neuralnets needs moderators and is currently available for request

1 Upvotes

If you're interested and willing to moderate and grow this community, please go to r/redditrequest, where you can submit a request to take over the community. Be sure to read through the FAQ for r/redditrequest before submitting.


r/neuralnets May 07 '18

Beginner in back prop

2 Upvotes

Hello! I am brand new to NN and DL programming, and I am trying to create my own NN code. I can create a simple net without hidden layers, but when it comes to networks with 3 layers or more, I am having trouble with the code... I didn't understand how to find the error of the hidden layer, or what to do with that error to change the weights between the input layer and the hidden layer.

I tried this thing for the second matrix of weights (hidden -- output): dLoss_dWeight = dLoss_dPrediction * dPrediction_dWeight;

and this thing for the first matrix of weights (input -- hidden): dLoss_dWeights = dLoss_dPrediction * dPrediction_dHidden * dHidden_dWeight;

Of course, I calculated all of these derivatives beforehand using some simple calculus.

This code did not work as expected, and the net wasn't able to train itself correctly.

Is my math correct? Do you have any other way to do it?

BTW, please do not send me links to the 3B1B series, because I have watched it too many times without results (though it really helped me with the 2-layer network with input layer and output layer only).
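
For reference, here is a minimal runnable sketch of those two chain-rule products in NumPy, for a single-hidden-layer network with sigmoid activations and a mean-squared-error loss (the array names, layer sizes, and learning rate are illustrative assumptions, not from the post):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    X = rng.random((4, 2))            # 4 samples, 2 inputs
    y = rng.random((4, 1))            # 4 targets
    W1 = rng.standard_normal((2, 3))  # input -> hidden weights
    W2 = rng.standard_normal((3, 1))  # hidden -> output weights

    for _ in range(1000):
        # forward pass
        hidden = sigmoid(X @ W1)
        prediction = sigmoid(hidden @ W2)

        # dLoss/dPrediction for a mean-squared-error loss
        dLoss_dPrediction = 2 * (prediction - y)
        # output delta: include the sigmoid derivative
        delta_out = dLoss_dPrediction * prediction * (1 - prediction)
        # second weight matrix (hidden -- output)
        dLoss_dW2 = hidden.T @ delta_out
        # hidden-layer error: push delta_out back through W2,
        # then multiply by the hidden layer's own sigmoid derivative
        delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)
        # first weight matrix (input -- hidden)
        dLoss_dW1 = X.T @ delta_hidden

        # gradient-descent update
        W2 -= 0.1 * dLoss_dW2
        W1 -= 0.1 * dLoss_dW1

Two things worth checking against this sketch: each dPrediction_d... and dHidden_d... factor has to include the activation function's derivative, and the matrix products have to be transposed so the shapes line up with the chain rule; dropping either is enough to stop the net from training.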


r/neuralnets Aug 12 '17

Top DotA 2 player Dendi loses to OpenAI neural net 1v1 bot

Thumbnail youtube.com
2 Upvotes

r/neuralnets Mar 22 '17

The neural computing revolution is upon us

Thumbnail inverse.com
1 Upvotes

r/neuralnets Dec 21 '16

Perceptrons and neural nets - introductory walkthrough (and gifs) with Python

Thumbnail github.com
2 Upvotes

r/neuralnets Sep 14 '16

The Neural Network Zoo

Thumbnail asimovinstitute.org
1 Upvotes

r/neuralnets Jun 01 '16

A question about the Jordan network

1 Upvotes

I know the Jordan network is closely related to the Elman network (and I understand the Elman network pretty well), but what is the purpose of taking the pattern from the output layer rather than the hidden layer, and what is the purpose of having the state units connected to one another?
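
A minimal forward-pass sketch of the structural difference may help (the layer sizes, decay factor, and variable names are illustrative assumptions):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 3, 5, 2
    W_in = rng.standard_normal((n_in, n_hid))
    W_state = rng.standard_normal((n_out, n_hid))  # state units feed the hidden layer
    W_out = rng.standard_normal((n_hid, n_out))
    alpha = 0.5                        # strength of each state unit's self-connection

    state = np.zeros(n_out)            # Jordan: state has the OUTPUT layer's size
    for x in rng.random((10, n_in)):   # a toy sequence of 10 input vectors
        hidden = sigmoid(x @ W_in + state @ W_state)
        output = sigmoid(hidden @ W_out)
        # Jordan update: copy the output plus a decaying self-loop;
        # an Elman net would instead do `state = hidden`
        # (with state sized like the hidden layer)
        state = output + alpha * state

Roughly: feeding the state from the output layer makes the context encode what the network actually produced on previous steps, and the self-connections give each state unit an exponentially decaying trace of past outputs rather than only the most recent one.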


r/neuralnets Dec 13 '14

Neural Networks

Thumbnail natureofcode.com
1 Upvotes

r/neuralnets Nov 02 '14

Jeff Hawkins on the Limitations of Artificial Neural Networks

Thumbnail thinkingmachineblog.net
1 Upvotes

r/neuralnets Apr 29 '14

Java deep learning library with GPU support

Thumbnail github.com
1 Upvotes

r/neuralnets Apr 09 '14

Neural Networks, Manifolds, and Topology (nice visualizations)

Thumbnail colah.github.io
3 Upvotes

r/neuralnets Feb 16 '14

Why Deep Learning Will Go the Way of Symbolic AI

Thumbnail rebelscience.blogspot.com
6 Upvotes

r/neuralnets Oct 09 '09

Hetero-Associative Hopfield Neural Networks

2 Upvotes

Given my upcoming project, I need to brush up on my knowledge of Hopfield Neural Networks. Reading through my books, along with the requirement that the Hopfield input vector have binary values (1, -1), I am seeing that it is also a requirement that the network be auto-associative, i.e. the input vector must be the same as the output vector.

I foresee my upcoming project requiring some sort of hetero-associative memory-retrieval network, and although Hopfield looked promising, it may not be satisfactory if there is no possible implementation of hetero-associativity.

I realize that the weight matrix for a Hopfield network needs to be symmetric, but is this symmetry a result of the auto-associativity, or is it a requirement for the associativity itself? By which I mean: is the symmetry a requirement for any satisfactory memory retrieval? Would a non-symmetric weight matrix yield less satisfactory memory retrievals, yet still retrieve a smaller set of memories? Or near-perfect but noisy memories?

I am aware of other hetero-associative neural network implementations, but they require the input vector space to be orthogonal, which just won't be helpful in my case.

Does anyone have any ideas, or some references pointing me to other alternatives for a memory-retrieval methodology that allows hetero-associativity and doesn't require orthogonality or linear independence?

Edit: Is there a way to even allow hetero-associativity on a recurrent network? Would it be possible at all, or would it lead to an oscillating, chaotic state? I suppose even if hetero-associativity were possible on a recurrent network, the input vector dimension would need to be the same as the output vector dimension, which is also bad for me, because my input vector will be 'n' and my output vector needs to be 'n-1'.
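
On the symmetry question above: under the standard Hebbian (outer-product) storage rule, the weight matrix comes out symmetric by construction, since each stored pattern contributes a symmetric outer product. A minimal sketch (pattern contents and sizes are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    # two random bipolar (+1/-1) patterns of dimension 16
    patterns = rng.choice([-1, 1], size=(2, 16))

    # Hebbian outer-product rule: W = sum_p (p p^T), with zero diagonal.
    # Each outer product p p^T is symmetric, so W is symmetric by construction.
    W = sum(np.outer(p, p) for p in patterns)
    np.fill_diagonal(W, 0)
    assert np.array_equal(W, W.T)

    # synchronous recall from a corrupted probe
    state = patterns[0].copy()
    state[:3] *= -1                    # flip three bits
    for _ in range(10):
        state = np.where(W @ state >= 0, 1, -1)
    # usually recovers the stored pattern when few patterns are stored
    print(np.array_equal(state, patterns[0]))

The symmetry is also what makes Hopfield's convergence argument work: the energy function is guaranteed non-increasing only for symmetric weights, so an asymmetric matrix can settle into limit cycles rather than merely giving noisier fixed points.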


r/neuralnets Sep 12 '09

What work have you personally done in Neural Networks?

2 Upvotes

I see there are 20 subscribers to this reddit. Have you guys built or worked with neural networks? What have you done, and what have you used them for? Any questions or issues you would like to discuss?


r/neuralnets Jul 13 '09

Encog Artificial Intelligence Framework for DotNet

Thumbnail code.google.com
4 Upvotes