Hi folks. I suppose this is a tech support/homework request. Basically, I'm coding a game, and I want to train a basic neural network to control the enemies. The plan is to train it on some basic decision-making data from the game, then use the trained network on its own to run the enemies in the actual game.
I have the game code already, and that works, including the basic decision making, and I have a basic neural network that works on small amounts of input data, but once the training set grows past a certain size it stops converging and can't make decisions anymore. I'm also struggling to get it to make predictions from new inputs.
I'm using Python 2.7. My neural network is based on the multi-layer network here: http://iamtrask.github.io/2015/07/12/basic-python-network/
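For reference, the multi-layer network in that post boils down to roughly this (a sketch following the post's pattern, not your exact code; the toy XOR-style dataset is just for illustration). Note the last bit: predicting on new inputs is just a forward pass through the trained weights, no back-propagation needed.

```python
import numpy as np

def nonlin(x, deriv=False):
    # sigmoid activation; deriv=True expects x to already be sigmoid output
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

# toy dataset: 3 inputs -> 1 output (XOR-ish)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])

np.random.seed(1)
syn0 = 2 * np.random.random((3, 4)) - 1   # input -> hidden weights
syn1 = 2 * np.random.random((4, 1)) - 1   # hidden -> output weights

for _ in range(10000):
    # forward pass
    l1 = nonlin(np.dot(X, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    # back-propagate the error through both layers
    l2_delta = (y - l2) * nonlin(l2, deriv=True)
    l1_delta = l2_delta.dot(syn1.T) * nonlin(l1, deriv=True)
    syn1 += l1.T.dot(l2_delta)
    syn0 += X.T.dot(l1_delta)

def predict(x):
    # new inputs: forward pass only, using the trained weights
    return nonlin(np.dot(nonlin(np.dot(x, syn0)), syn1))
```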
This is an academic exercise, so even if this isn't the most efficient way to make decisions, I'm stuck with using some form of neural network at this point.
Please help? Thanks. I can provide more information as needed.
Just a little nudge here. Could still use some help, though I know it's a longshot.
Can't you just use sklearn?
http://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html#sphx-glr-auto-examples-datasets-plot-iris-dataset-py
Are you using unsupervised or supervised training?
With a game like chess you can do the latter. Just go through thousands of games played in a database (they exist) and calculate the "score" of a given move.
But with a vidya game, you probably don't have time to train it properly. So you might try a hybrid approach: run a genetic-algorithm-style tournament among different "A.I.s", mutating their weights to arrive at something that doesn't suck, then play against it, record the games, and train some more.
>>306485
Thanks for responding. I wish, but part of the goal is to implement the architecture myself. I'm also not sure how to go about applying that to my data.
>>306499
Thank you for your response as well. I am trying a hybrid approach, a bit like you suggest: I gather data on what I want it to do (using some basic if/else logic), then try to get the network to reproduce that behavior, and hopefully keep learning some in-game. The genetic algorithm idea seems handy, though I'm not sure exactly how to implement it; if it turns out to be the easiest choice I can look into it.
What I'm struggling with right now is implementing the thing. It seems like I should gather a lot of data, but once I have a lot of data the training stops working: the error in the results refuses to go down no matter how many iterations it goes through.
Is there something problematic about the structure in the link in the original post that's making this happen? I'll admit, I may have bitten off a bit more than I can handle with this project and my current understanding of neural networks.
>>306526
How many inputs and outputs does your NN have? If it has just one hidden layer about the size of the input layer, like the example in the link, that's on the order of n**2 weights, and if there are a lot of them back-propagation (training) will take time.
In a turn-by-turn game you could develop a scoring system for various moves, log what the AI does, and do a postmortem at the end of the game to further train the NN, instead of trying to do it while the game is in progress.
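A minimal sketch of that log-then-train idea, assuming a toy state vector and a fixed action list (the names and the scoring are placeholders; "reinforce everything if the AI won" is a deliberately crude stand-in for a real per-move scorer):

```python
import numpy as np

ACTIONS = ["attack", "approach", "flee", "wander"]  # placeholder names

game_log = []  # (state_vector, action_index) pairs collected during play

def record_decision(state, action_index):
    # call this each time the AI acts; appending is cheap,
    # so it won't slow the game down
    game_log.append((list(state), action_index))

def postmortem_training_set(won):
    # after the game, turn the log into (input, target) arrays.
    # Crude scoring: reinforce the chosen action only if the AI won.
    X = np.array([s for s, _ in game_log])
    y = np.zeros((len(game_log), len(ACTIONS)))
    for row, (_, a) in zip(y, game_log):
        row[a] = 1.0 if won else 0.0
    return X, y
```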
>>307088
It currently has 5 inputs and one output, with 4 possible values for the output. It used to have more inputs but I thought maybe that was why it wasn't working, so I simplified it. It's not a turn-based game, it's a top-down dungeon crawler. Also, I'm using the second example neural network shown, if that matters.
Basically I just want the AI to attack, approach, flee, or wander randomly. I'd like if it could control exactly where the enemy goes, but I'm putting that idea on the back-burner until I get just this working.
The main goal is that it somewhat adapts to the player during gameplay, but learns most of its mannerisms before the game from sample data of what I want it to do.
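One thing worth checking: a single sigmoid output can't cleanly represent four discrete actions, and squeezing four values into one neuron can by itself keep the error stuck high. The usual fix (not from the linked post, just common practice) is one output neuron per action, one-hot target vectors during training, and argmax at prediction time:

```python
import numpy as np

ACTIONS = ["attack", "approach", "flee", "wander"]

def one_hot(action_index, n=len(ACTIONS)):
    # training target: 1.0 in the chosen action's slot, 0.0 elsewhere
    v = np.zeros(n)
    v[action_index] = 1.0
    return v

def choose_action(output_layer):
    # after a forward pass, the 4 output activations score each action;
    # pick the highest-scoring one
    return ACTIONS[int(np.argmax(output_layer))]
```

With this setup your network would have 5 inputs and 4 outputs, and the training targets become rows like `one_hot(2)` instead of a single number.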