Artificial intelligence - TPerceptron class

Re: Artificial intelligence - TPerceptron class

Postby Carles » Thu May 18, 2017 4:42 pm

Hi,

Perhaps if we take the TPerceptron class as if it were a neuron that keeps on learning, we could approach its design with a singleton-type pattern, so that, for example, the aWeight (weight) DATA became a CLASSDATA. The interesting part is that any new object created during the program's life cycle would already start with the neuron "trained" and readier, helping to accumulate more weights, so every object created from this class would benefit... Gosh, I don't know if I've explained myself well :roll:

Today this is already more interesting... :)
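
As a rough sketch of the idea (a hypothetical TSharedNeuron class, not the actual TPerceptron), the weights could be CLASSDATAs, so every instance reads and updates the same learned state:

Code:
// Hypothetical sketch of the CLASSDATA idea: all objects of the class
// share (and keep improving) the same aWeights and nBias.
#include "hbclass.ch"

CLASS TSharedNeuron

   CLASSDATA aWeights INIT { 0, 0 }   // shared, "already learned" weights
   CLASSDATA nBias    INIT 0

   METHOD New() INLINE Self

   METHOD Output( aInputs )

   METHOD Learn( aInputs, nTarget, nRate )

ENDCLASS

METHOD Output( aInputs ) CLASS TSharedNeuron
return If( aInputs[ 1 ] * ::aWeights[ 1 ] + ;
           aInputs[ 2 ] * ::aWeights[ 2 ] + ::nBias > 0, 1, 0 )

METHOD Learn( aInputs, nTarget, nRate ) CLASS TSharedNeuron

   local nError := nTarget - ::Output( aInputs )

   ::aWeights[ 1 ] += nRate * nError * aInputs[ 1 ]   // whatever one object learns
   ::aWeights[ 2 ] += nRate * nError * aInputs[ 2 ]   // is seen by all the others
   ::nBias += nRate * nError

return nil

Any object created later with TSharedNeuron():New() would start out with whatever the class has already learned.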


Saludetes...
Salutacions, saludos, regards

"...programar es fácil, hacer programas es difícil..."

UT Page -> https://carles9000.github.io/
Forum UT -> https://discord.gg/bq8a9yGMWh
Skype -> https://join.skype.com/cnzQg3Kr1dnk

Re: Artificial intelligence - TPerceptron class

Postby Antonio Linares » Fri May 19, 2017 10:42 am

Charly,

very good :-)

Here I summarize some simple ideas on the subject:
viewtopic.php?p=201907#p201907

If you want, I can translate it
regards, saludos

Antonio Linares
www.fivetechsoft.com

Re: Artificial intelligence - TPerceptron class

Postby xmanuel » Fri May 19, 2017 6:42 pm

...and if we add to Charly's idea that the object be persistent, the perceptron's wisdom would reach unsuspected limits.
Persistence could be achieved by serializing the object, or with a simple DBF in which the object's state is stored, that is, the values of its DATAs at a given moment. :D
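
A rough sketch of the DBF route (file, alias and field names are invented here, and the neuron is assumed to expose its weights in an aWeights array): the state is written to a table and read back at the next start, so a fresh run begins with yesterday's learning. Serializing the relevant DATAs (for example hb_Serialize() on the weights array) would be the other option.

Code:
// Hypothetical sketch: persist a neuron's weights in a simple DBF.
// SaveNeuron() rewrites the table, LoadNeuron() restores the weights.

function SaveNeuron( oNeuron, cDbf )

   local n

   if ! File( cDbf + ".dbf" )
      dbCreate( cDbf, { { "NWEIGHT", "N", 18, 10 } } )
   endif

   USE ( cDbf ) EXCLUSIVE NEW ALIAS brain
   ZAP                               // keep only the latest state
   for n = 1 to Len( oNeuron:aWeights )
      APPEND BLANK
      brain->NWEIGHT := oNeuron:aWeights[ n ]
   next
   USE                               // close the workarea

return nil

function LoadNeuron( oNeuron, cDbf )

   local n := 1

   USE ( cDbf ) EXCLUSIVE NEW ALIAS brain
   do while ! Eof() .and. n <= Len( oNeuron:aWeights )
      oNeuron:aWeights[ n ] := brain->NWEIGHT
      n++
      dbSkip()
   enddo
   USE

return nil
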
______________________________________________________________________________
Sevilla - Andalucía

Re: Artificial intelligence - TPerceptron class

Postby xmanuel » Fri May 19, 2017 6:43 pm

Please Antonio, get on with it :lol: :lol: :lol:
______________________________________________________________________________
Sevilla - Andalucía

Re: Artificial intelligence - TPerceptron class

Postby Antonio Linares » Fri May 19, 2017 6:59 pm

Here is the translation:

Pedro Domingos calls them "learners": software that "learns" from data.

The simplest way to learn from data is to compare two bytes. How? By subtracting them: zero means they are equal, and if the result is non-zero then they are different. The difference between the two values is the "error". To correct the error, we modify a "weight". It is surprising how much can be built from this simple concept, in the same way that all software technology is based on the bit, which can be zero or one.

The perceptron imitates (in a very simple way) the behaviour of a neuron in the brain. The neuron receives several inputs, applies a "weight" (stored in the neuron) to each input, and the sum of all those inputs multiplied by their weights generates an output (one or zero), which is then propagated to other neurons.

Thanks to backpropagation those weights are fine-tuned, and the perceptron finally "settles" on the right weight for each input in order to generate the expected output.
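
As a toy illustration of that cycle (weighted sum, threshold output, error, weight correction), and not the actual TPerceptron class, here is a minimal single perceptron in Harbour that learns the logical AND function:

Code:
// Toy sketch: one perceptron learning logical AND with the
// "output, error, adjust the weights" rule described above.
function Main()

   local aWeights := { 0, 0 }, nBias := 0, nRate := 0.1
   local aSamples := { { { 0, 0 }, 0 }, { { 0, 1 }, 0 }, { { 1, 0 }, 0 }, { { 1, 1 }, 1 } }
   local nEpoch, i, aIn, nTarget, nSum, nOut, nError

   for nEpoch = 1 to 20
      for i = 1 to Len( aSamples )
         aIn     = aSamples[ i ][ 1 ]
         nTarget = aSamples[ i ][ 2 ]
         nSum    = aIn[ 1 ] * aWeights[ 1 ] + aIn[ 2 ] * aWeights[ 2 ] + nBias
         nOut    = If( nSum > 0, 1, 0 )              // output: one or zero
         nError  = nTarget - nOut                    // the "error"
         aWeights[ 1 ] += nRate * nError * aIn[ 1 ]  // correct the weights
         aWeights[ 2 ] += nRate * nError * aIn[ 2 ]
         nBias += nRate * nError
      next
   next

   ? "Weights:", aWeights[ 1 ], aWeights[ 2 ], "Bias:", nBias

return nil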

Artificial intelligence is already being used in many sectors, and it will greatly change our lives and the way software is developed.
regards, saludos

Antonio Linares
www.fivetechsoft.com

Re: Artificial intelligence - TPerceptron class

Postby Antonio Linares » Tue May 23, 2017 8:48 am

Image
regards, saludos

Antonio Linares
www.fivetechsoft.com

Re: Artificial intelligence - TPerceptron class

Postby Antonio Linares » Tue May 23, 2017 9:34 am

Image

Multilayer Perceptron
regards, saludos

Antonio Linares
www.fivetechsoft.com

Re: Artificial intelligence - TPerceptron class

Postby Antonio Linares » Fri May 26, 2017 6:34 pm

Code:
/*
 * backprop.c
 * Backpropagation neural network library.
 *
 * 2016, December 13 - fixed bkp_loadfromfile. Changed the file format
 * to include a file type, 'A', and the network type. Updated
 * bkp_savetofile to match.
 * 2016, April 7 - made bkp_query return BiasVals, BHWeights and BIWeights
 * 2016, April 3 - cleaned up version for website publication
 * 1992 - originally written around this time
 * A note of credit:
 * This code had its origins as code obtained back around 1992 by sending
 * a floppy disk to The Amateur Scientist, Scientific American magazine.
 * I've since modified and added to it a great deal, and it's even on
 * its 3rd OS (MS-DOS -> QNX -> Windows). As I no longer have the
 * original I can't know how much is left to give credit for.
 */

#define CMDIFFSTEPSIZE   1 /* set to 1 for Chen & Mars differential step size */
#define DYNAMIC_LEARNING 0 /* set to 1 for Dynamic Learning */
#include <errno.h>
#include <fcntl.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>
#include "backprop.h"

static void bkp_setup_all(bkp_network_t *n);
static void bkp_forward(bkp_network_t *n);
static void bkp_backward(bkp_network_t *n);

/* The following sigmoid returns values from 0.0 to 1.0 */
#define sigmoid(x)           (1.0 / (1.0 + (float)exp(-(double)(x))))
#define sigmoidDerivative(x) ((float)(x)*(1.0-(x)))
/* random() for -1 to +1 */
#define random()             ((((float)rand()/(RAND_MAX)) * 2.0) - 1.0)
/* random() for -0.5 to +0.5
#define random()             (((float)rand()/(RAND_MAX)) - 0.5)
*/


/*
 * bkp_create_network - Create a new network with the given configuration.
 * Returns a pointer to the new network in 'n'.
 *
 * Return Value:
 * int  0: Success
 *     -1: Error, errno is set to:
 *         ENOMEM - out of memory
 *         EINVAL - config.Type is one which this server does not handle
 */

int bkp_create_network(bkp_network_t **n, bkp_config_t *config)
{
   if (config->Type != BACKPROP_TYPE_NORMAL) {
      errno = EINVAL;
      return -1;
   }

   if ((*n = (bkp_network_t *) malloc(sizeof(bkp_network_t))) == NULL) {
      errno = ENOMEM;
      return -1;
   }
   
   (*n)->NumInputs = config->NumInputs;
   (*n)->NumHidden = config->NumHidden;
   (*n)->NumOutputs = config->NumOutputs;
   (*n)->NumConsecConverged = 0;
   (*n)->Epoch = (*n)->LastRMSError = (*n)->RMSSquareOfOutputBetas = 0.0;
   (*n)->NumBias = 1;
   if (config->StepSize == 0)
      (*n)->StepSize = 0.5;
   else
      (*n)->StepSize = config->StepSize;
#if CMDIFFSTEPSIZE
   (*n)->HStepSize = 0.1 * (*n)->StepSize;
#endif
   if (config->Momentum == -1)
      (*n)->Momentum = 0.5;
   else
      (*n)->Momentum = config->Momentum;
   (*n)->Cost = config->Cost;
   if (((*n)->GivenInputVals = (float *) malloc((*n)->NumInputs * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->GivenDesiredOutputVals = (float *) malloc((*n)->NumOutputs * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->IHWeights = (float *) malloc((*n)->NumInputs * (*n)->NumHidden * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->PrevDeltaIH = (float *) malloc((*n)->NumInputs * (*n)->NumHidden * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->PrevDeltaHO = (float *) malloc((*n)->NumHidden * (*n)->NumOutputs * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->PrevDeltaBH = (float *) malloc((*n)->NumBias * (*n)->NumHidden * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->PrevDeltaBO = (float *) malloc((*n)->NumBias * (*n)->NumOutputs * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->HiddenVals = (float *) malloc((*n)->NumHidden * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->HiddenBetas = (float *) malloc((*n)->NumHidden * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->HOWeights = (float *) malloc((*n)->NumHidden * (*n)->NumOutputs * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->BiasVals = (float *) malloc((*n)->NumBias * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->BHWeights = (float *) malloc((*n)->NumBias * (*n)->NumHidden * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->BOWeights = (float *) malloc((*n)->NumBias * (*n)->NumOutputs * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->OutputVals = (float *) malloc((*n)->NumOutputs * sizeof(float))) == NULL)
      goto memerrorout;
   if (((*n)->OutputBetas = (float *) malloc((*n)->NumOutputs * sizeof(float))) == NULL)
      goto memerrorout;
   bkp_setup_all(*n);
   return 0;
   
memerrorout:
   bkp_destroy_network(*n);
   errno = ENOMEM;
   return -1;
}

/*
 * bkp_destroy_network - Frees up any resources allocated for the
 * given neural network.
 *
 * Return Values:
 *    (none)
 */

void bkp_destroy_network(bkp_network_t *n)
{
   if (n == NULL)
      return;

   if (n->GivenInputVals == NULL) return;
   bkp_clear_training_set(n);
   free(n->GivenInputVals);
   if (n->GivenDesiredOutputVals != NULL) { free(n->GivenDesiredOutputVals); n->GivenDesiredOutputVals = NULL; }
   if (n->IHWeights != NULL) { free(n->IHWeights); n->IHWeights = NULL; }
   if (n->PrevDeltaIH != NULL) { free(n->PrevDeltaIH); n->PrevDeltaIH = NULL; }
   if (n->PrevDeltaHO != NULL) { free(n->PrevDeltaHO); n->PrevDeltaHO = NULL; }
   if (n->PrevDeltaBH != NULL) { free(n->PrevDeltaBH); n->PrevDeltaBH = NULL; }
   if (n->PrevDeltaBO != NULL) { free(n->PrevDeltaBO); n->PrevDeltaBO = NULL; }
   if (n->HiddenVals != NULL) { free(n->HiddenVals); n->HiddenVals = NULL; }
   if (n->HiddenBetas != NULL) { free(n->HiddenBetas); n->HiddenBetas = NULL; }
   if (n->HOWeights != NULL) { free(n->HOWeights); n->HOWeights = NULL; }
   if (n->BiasVals != NULL) { free(n->BiasVals); n->BiasVals = NULL; }
   if (n->BHWeights != NULL) { free(n->BHWeights); n->BHWeights = NULL; }
   if (n->BOWeights != NULL) { free(n->BOWeights); n->BOWeights = NULL; }
   if (n->OutputVals != NULL) { free(n->OutputVals); n->OutputVals = NULL; }
   if (n->OutputBetas != NULL) { free(n->OutputBetas); n->OutputBetas = NULL; }
   n->GivenInputVals = NULL;
   free(n);
}

/*
 * bkp_set_training_set - Gives addresses of the input and target data
 * in the form of input values and output values. No data is copied
 * so do not destroy the originals until you call
 * bkp_clear_training_set(), or bkp_destroy_network().
 *
 * Return Values:
 * int 0: Success
 *    -1: Error, errno is:
 *        ENOENT if no bkp_create_network() has been done yet.
 */

int bkp_set_training_set(bkp_network_t *n, int ntrainset, float *tinputvals, float *targetvals)
{
   if (!n) {
      errno = ENOENT;
      return -1;
   }

   bkp_clear_training_set(n);

   n->NumInTrainSet = ntrainset;
   n->TrainSetInputVals = tinputvals;
   n->TrainSetDesiredOutputVals = targetvals;
   
   return 0;
}

/*
 * bkp_clear_training_set - Invalidates the training set such that
 * you can no longer use it for training. After this you can free
 * up any memory associated with the training data you'd passed to
 * bkp_set_training_set(). It has not been modified in any way.
 *
 * Return Values:
 *    (none)
 */

void bkp_clear_training_set(bkp_network_t *n)
{
   if (n->NumInTrainSet > 0) {
      n->TrainSetInputVals = NULL;
      n->TrainSetDesiredOutputVals = NULL;
      n->NumInTrainSet = 0;
   }
}

static void bkp_setup_all(bkp_network_t *n)
{
   int i, h, o, b;
   
   n->InputReady = n->DesiredOutputReady = n->Learned = 0;

   n->LearningError = 0.0;
   
   for (i = 0;  i < n->NumInputs;  i++)
      n->GivenInputVals[i] = 0.0;
   
   for(h = 0;  h < n->NumHidden;  h++) {
      n->HiddenVals[h] = 0.0;
      for (i = 0;  i < n->NumInputs;  i++) {
         n->IHWeights[i+(h*n->NumInputs)] = random();
         n->PrevDeltaIH[i+(h*n->NumInputs)] = 0.0;
      }
      for (b = 0;  b < n->NumBias;  b++) {
         n->BHWeights[b+(h*n->NumBias)] = random();
         n->PrevDeltaBH[b+(h*n->NumBias)] = 0.0;
      }
   }

   for(o = 0;  o < n->NumOutputs;  o++) {
      n->OutputVals[o] = 0.0;
      for (h = 0;  h < n->NumHidden;  h++) {
         n->HOWeights[h+(o*n->NumHidden)] = random();
         n->PrevDeltaHO[h+(o*n->NumHidden)] = 0.0;
      }
      for (b = 0;  b < n->NumBias;  b++) {
         n->BOWeights[b+(o*n->NumBias)] = random();
         n->PrevDeltaBO[b+(o*n->NumBias)] = 0.0;
      }
   }
   
   for (b = 0;  b < n->NumBias;  b++)
      n->BiasVals[b] = 1.0;
}

/*
 * bkp_learn - Tells backprop to learn the current training set ntimes.
 * The training set must already have been set by calling
 * bkp_set_training_set(). This does not return until the training
 * has been completed. You can call bkp_query() after this to find out
 * the results of the learning.
 *
 * Return Values:
 * int 0: Success
 *    -1: Error, errno is:
 *        ENOENT if no bkp_create_network() has been done yet.
 *        ESRCH if no bkp_set_training_set() has been done yet.
 */

int bkp_learn(bkp_network_t *n, int ntimes)
{
   int item, run;
   
   if (!n) {
      errno = ENOENT;
      return -1;
   }
   if (n->NumInTrainSet == 0) {
      errno = ESRCH;
      return -1;
   }

   for (run = 0;  run < ntimes;  run++) {
      for (item = 0;  item < n->NumInTrainSet;  item++) {
         /* set up for the given set item */
         n->InputVals = &(n->TrainSetInputVals[item*n->NumInputs]);
         n->DesiredOutputVals = &(n->TrainSetDesiredOutputVals[item*n->NumOutputs]);

         /* now do the learning */ 
         bkp_forward(n);
         bkp_backward(n);
      }
   
      /* now that we have gone through the entire training set, calculate the
         RMS to see how well we have learned */

           
      n->Epoch++;

      /* calculate the RMS error for the epoch just completed */
      n->LastRMSError = sqrt(n->RMSSquareOfOutputBetas / (n->NumInTrainSet * n->NumOutputs));
      n->RMSSquareOfOutputBetas = 0.0;
       
#if DYNAMIC_LEARNING
      if (n->Epoch > 1) {
         if (n->PrevRMSError < n->LastRMSError) {
            /* diverging */
            n->NumConsecConverged = 0;
            n->StepSize *= 0.95; /* make step size smaller */
#if CMDIFFSTEPSIZE
            n->HStepSize = 0.1 * n->StepSize;
#endif
#ifdef DISPLAYMSGS
            printf("Epoch: %d Diverging:  Prev %f, New %f, Step size %f\n",
               n->Epoch, n->PrevRMSError, n->LastRMSError, n->StepSize);
#endif
         } else if (n->PrevRMSError > n->LastRMSError) {
            /* converging */
            n->NumConsecConverged++;
            if (n->NumConsecConverged == 5) {
               n->StepSize += 0.04; /* make step size bigger */
#if CMDIFFSTEPSIZE
               n->HStepSize = 0.1 * n->StepSize;
#endif
#ifdef DISPLAYMSGS
               printf("Epoch: %d Converging: Prev %f, New %f, Step size %f\n",
                  n->Epoch, n->PrevRMSError, n->LastRMSError, n->StepSize);
#endif
               n->NumConsecConverged = 0;
            }
         } else {
            n->NumConsecConverged = 0;
         }
      }
      n->PrevRMSError = n->LastRMSError;
#endif
   }
   n->Learned = 1;
   return 0;
}

/*
 * bkp_evaluate - Evaluate but don't learn the current input set.
 * This is usually preceded by a call to bkp_set_input() and is
 * typically called after the training set (epoch) has been learned.
 *
 * If you give eoutputvals as NULL then you can do a bkp_query() to
 * get the results.
 *
 * If you give the address of a buffer to return the results of the
 * evaluation (eoutputvals != NULL) then the results will copied to the
 * eoutputvals buffer.
 *
 * Return Values:
 * int 0: Success
 *    -1: Error, errno is:
 *        ENOENT if no bkp_create_network() has been done yet.
 *        ESRCH if no bkp_set_input() has been done yet.
 *        ENODEV if both bkp_create_network() and bkp_set_input()
 *               have been done but bkp_learn() has not been done
 *               yet (ie; neural net has not had any training).
 *        EINVAL if sizeofoutputvals is not the same as the
 *               size understood according to n. This is to help
 *               prevent buffer overflow during copying.
 */

int bkp_evaluate(bkp_network_t *n, float *eoutputvals, int sizeofoutputvals)
{
   if (!n) {
      errno = ENOENT;
      return -1;
   }
   if (!n->InputReady) {
      errno = ESRCH;
      return -1;
   }
   if (!n->Learned) {
      errno = ENODEV;
      return -1;
   }

   n->InputVals = n->GivenInputVals;
   n->DesiredOutputVals = n->GivenDesiredOutputVals;

   bkp_forward(n);

   if (eoutputvals) {
      if (sizeofoutputvals != n->NumOutputs*sizeof(float)) {
         errno = EINVAL;
         return -1;
      }
      memcpy(eoutputvals, n->OutputVals, n->NumOutputs*sizeof(float));
   }
   return 0;
}

/*
 * bkp_forward - This makes a pass from the input units to the hidden
 * units to the output units, updating the hidden units, output units and
 * other components. This is how the neural network is run in order to
 * evaluate a set of input values to get output values.
 * When training the neural network, this is the first step in the
 * backpropagation algorithm.
 */

static void bkp_forward(bkp_network_t *n)
{
   int i, h, o, b;
   
   n->LearningError = 0.0;

   /*
    * Apply input unit values to weights between input units and hidden units
    * Apply bias unit values to weights between bias units and hidden units
    */

   for (h = 0;  h < n->NumHidden;  h++) {
      n->HiddenVals[h] = 0.0;
      n->HiddenBetas[h] = 0.0; /* needed if doing a backward pass later */
      for (i = 0;  i < n->NumInputs;  i++)
         n->HiddenVals[h] = n->HiddenVals[h] + (n->InputVals[i] * n->IHWeights[i+(h*n->NumInputs)]);
      for (b = 0;  b < n->NumBias;  b++)
         n->HiddenVals[h] = n->HiddenVals[h] + (n->BiasVals[b] * n->BHWeights[b+(h*n->NumBias)]);
      n->HiddenVals[h] = sigmoid(n->HiddenVals[h]);
   }
   
   /*
    * Apply hidden unit values to weights between hidden units and outputs
    * Apply bias unit values to weights between bias units and outputs
    */

   for (o = 0;  o < n->NumOutputs;  o++) {
      n->OutputVals[o] = 0.0;
      for (h = 0;  h < n->NumHidden;  h++)
         n->OutputVals[o] = n->OutputVals[o] + (n->HiddenVals[h] * n->HOWeights[h+(o*n->NumHidden)]);
      for (b = 0;  b < n->NumBias;  b++)
         n->OutputVals[o] = n->OutputVals[o] + (n->BiasVals[b] * n->BOWeights[b+(o*n->NumBias)]);
      n->OutputVals[o] = sigmoid(n->OutputVals[o]);
      n->LearningError = n->LearningError +
         ((n->OutputVals[o] - n->DesiredOutputVals[o]) * (n->OutputVals[o] - n->DesiredOutputVals[o]));
   }
   n->LearningError = n->LearningError / 2.0;
}

/*
 * bkp_backward - This is the 2nd half of the backpropagation algorithm
 * which is carried out immediately after bkp_forward() has done its
 * step of calculating the outputs. This does the reverse, comparing
 * those output values to those given as targets in the training set
 * and updating the weights and other components appropriately, which
 * is essentially the training of the neural network.
 */

static void bkp_backward(bkp_network_t *n)
{
   float deltaweight;
   int i, h, o, b;

   for (o = 0;  o < n->NumOutputs;  o++) {
      /* calculate beta for output units */
      n->OutputBetas[o] = n->DesiredOutputVals[o] - n->OutputVals[o];

      /* update for RMS error */
      n->RMSSquareOfOutputBetas += (n->OutputBetas[o] * n->OutputBetas[o]);

      /* update hidden to output weights */
      for (h = 0;  h < n->NumHidden;  h++) {
         /* calculate beta for hidden units for later */
         n->HiddenBetas[h] = n->HiddenBetas[h] +
            (n->HOWeights[h+(o*n->NumHidden)] * sigmoidDerivative(n->OutputVals[o]) * n->OutputBetas[o]);

#if CMDIFFSTEPSIZE
         deltaweight = n->HiddenVals[h] * n->OutputBetas[o];
#else
         deltaweight = n->HiddenVals[h] * n->OutputBetas[o] *
            sigmoidDerivative(n->OutputVals[o]);
#endif
         n->HOWeights[h+(o*n->NumHidden)] = n->HOWeights[h+(o*n->NumHidden)] +
            (n->StepSize * deltaweight) +
            (n->Momentum * n->PrevDeltaHO[h+(o*n->NumHidden)]);
         n->PrevDeltaHO[h+(o*n->NumHidden)] = deltaweight;
      }
      /* update bias to output weights */
      for (b = 0;  b < n->NumBias;  b++) {
#if CMDIFFSTEPSIZE
         deltaweight = n->BiasVals[b] * n->OutputBetas[o];
#else
         deltaweight = n->BiasVals[b] * n->OutputBetas[o] *
            sigmoidDerivative(n->OutputVals[o]);
#endif
         n->BOWeights[b+(o*n->NumBias)] = n->BOWeights[b+(o*n->NumBias)] +
            (n->StepSize * deltaweight) +
            (n->Momentum * n->PrevDeltaBO[b+(o*n->NumBias)]);
         n->PrevDeltaBO[b+(o*n->NumBias)] = deltaweight;
      }
   }

   for (h = 0;  h < n->NumHidden;  h++) {
      /* update input to hidden weights */
      for (i = 0;  i < n->NumInputs;  i++) {
         deltaweight = n->InputVals[i] * sigmoidDerivative(n->HiddenVals[h]) *
            n->HiddenBetas[h];
         n->IHWeights[i+(h*n->NumInputs)] = n->IHWeights[i+(h*n->NumInputs)] +
#if CMDIFFSTEPSIZE
            (n->HStepSize * deltaweight) +
#else
            (n->StepSize * deltaweight) +
#endif
            (n->Momentum * n->PrevDeltaIH[i+(h*n->NumInputs)]);
         n->PrevDeltaIH[i+(h*n->NumInputs)] = deltaweight;
         if (n->Cost)
            n->IHWeights[i+(h*n->NumInputs)] = n->IHWeights[i+(h*n->NumInputs)] -
               (n->Cost * n->IHWeights[i+(h*n->NumInputs)]);
      }
      /* update bias to hidden weights */
      for (b = 0;  b < n->NumBias;  b++) {
         deltaweight = n->BiasVals[b] * n->HiddenBetas[h] *
            sigmoidDerivative(n->HiddenVals[h]);
         n->BHWeights[b+(h*n->NumBias)] = n->BHWeights[b+(h*n->NumBias)] +
#if CMDIFFSTEPSIZE
            (n->HStepSize * deltaweight) +
#else
            (n->StepSize * deltaweight) +
#endif
            (n->Momentum * n->PrevDeltaBH[b+(h*n->NumBias)]);
         n->PrevDeltaBH[b+(h*n->NumBias)] = deltaweight;
         if (n->Cost)
            n->BHWeights[b+(h*n->NumBias)] = n->BHWeights[b+(h*n->NumBias)] -
               (n->Cost * n->BHWeights[b+(h*n->NumBias)]);
      }
   }
}

/*
 * bkp_query - Get the current state of the neural network.
 *
 * Parameters (all parameters return information unless given as NULL):
 * float *qlastlearningerror: The error for the last set of inputs
 *                            and outputs learned by bkp_learn()
 *                            or evaluated by bkp_evaluate().
 *                            It is the sum of the squares
 *                            of the difference between the actual
 *                            outputs and the target or desired outputs,
 *                            all divided by 2
 * float *qlastrmserror:      The RMS error for the last epoch learned
 *                            i.e. the learning of the training set.
 * float *qinputvals:         An array to fill with the current input
 *                            values (must be at least
 *                            bkp_config_t.NumInputs * sizeof(float))
 * float *qihweights:         An array to fill with the current input
 *                            units to hidden units weights (must be at
 *                            least bkp_config_t.NumInputs *
 *                            bkp_config_t.NumHidden * sizeof(float)
 * float *qhiddenvals:        An array to fill with the current hidden
 *                            unit values (must be at least
 *                            bkp_config_t.NumHidden * sizeof(float))
 * float *qhoweights:         An array to fill with the current hidden
 *                            units to output units weights (must be at
 *                            least bkp_config_t.NumHidden *
 *                            bkp_config_t.NumOutputs * sizeof(float))
 * float *qoutputvals:        An array to fill with the current output
 *                            values (must be at least
 *                            bkp_config_t.NumOutputs * sizeof(float))
 * Note that for the following three, the size required is 1 * ...
 * The reason for the 1 is because there is only one bias unit for
 * everything. Theoretically there could be more though.
 * float *qbhweights:         An array to fill with the current bias
 *                            units to hidden units weights (must be at
 *                            least 1 * bkp_config_t->NumHidden *
 *                            sizeof(float))
 * float *qbiasvals:          An array to fill with the current bias
 *                            values (must be at least 1 * sizeof(float))
 * float *qboweights:         An array to fill with the current bias
 *                            units to output units weights (must be at
 *                            least 1 * (*n)->NumOutputs * sizeof(float))
 *
 * Return Values:
 * int 0: Success
 *    -1: Error, errno is:
 *        ENOENT if no bkp_create_network() has been done yet.
 *        ENODEV if bkp_create_network() has been done
 *               but bkp_learn() has not been done yet (ie; neural
 *               net has not had any training).
 */

int bkp_query(bkp_network_t *n,
      float *qlastlearningerror, float *qlastrmserror,
    float *qinputvals, float *qihweights, float *qhiddenvals,
    float *qhoweights, float *qoutputvals,
      float *qbhweights, float *qbiasvals, float *qboweights)
{
   if (!n) {
      errno = ENOENT;
      return -1;
   }
   if (!n->Learned) {
      errno  = ENODEV;
      return -1;
   }
   if (qlastlearningerror)
      *qlastlearningerror = n->LearningError;
   if (qlastrmserror)
      *qlastrmserror = n->LastRMSError;
   if (qinputvals)
      memcpy(qinputvals, n->InputVals, n->NumInputs*sizeof(float));
   if (qihweights)
      memcpy(qihweights, n->IHWeights, (n->NumInputs*n->NumHidden)*sizeof(float));
   if (qhiddenvals)
      memcpy(qhiddenvals, n->HiddenVals, n->NumHidden*sizeof(float));
   if (qhoweights)
      memcpy(qhoweights, n->HOWeights, (n->NumHidden*n->NumOutputs)*sizeof(float));
   if (qoutputvals)
      memcpy(qoutputvals, n->OutputVals, n->NumOutputs*sizeof(float));
   if (qbhweights)
      memcpy(qbhweights, n->BHWeights, n->NumBias*n->NumHidden*sizeof(float));
   if (qbiasvals)
      memcpy(qbiasvals, n->BiasVals, n->NumBias*sizeof(float));
   if (qboweights)
      memcpy(qboweights, n->BOWeights, n->NumBias*n->NumOutputs*sizeof(float));
   return 0;
}

/*
 * bkp_set_input - Use this to set the current input values of the neural
 * network. Nothing is done with the values until bkp_learn() is called.
 *
 * Parameters:
 * int setall: If 1: Set all inputs to val. Any sinputvals are ignored so
 *                   you may as well give sinputvals as NULL.
 *             If 0: val is ignored. You must provide sinputvals.
 * float val: See setall.
 * float sinputvals: An array of input values.  The array should contain
 *                   bkp_config_t.NumInputs elements.
 *
 * Return Values:
 * int 0: Success
 *    -1: Error, errno is:
 *        ENOENT if no bkp_create_network() has been done yet.
 */

int bkp_set_input(bkp_network_t *n, int setall, float val, float *sinputvals)
{
   int i;

   if (!n) {
      errno = ENOENT;
      return -1;
   }

   if (setall) {
      for (i = 0;  i < n->NumInputs;  i++)
         n->GivenInputVals[i] = val;
   } else {
      memcpy(n->GivenInputVals, sinputvals, n->NumInputs*sizeof(float));
   }

   n->InputReady = 1;
   return 0;
}

/*
 * bkp_set_output - Use this so that bkp_evaluate() can calculate the
 * error between the output values you pass to bkp_set_output() and
 * the output it gets by evaluating the network using the input values
 * you passed to the last call to bkp_set_input(). The purpose is so
 * that you can find out what that error is using bkp_query()'s
 * qlastlearningerror argument. Typically bkp_set_output() will have been
 * accompanied by a call to bkp_set_input().
 *
 * Parameters:
 * int setall: If 1: Set all outputs to val. Any soutputvals
 *                   are ignored so you may as well give
 *                   soutputvals as NULL.
 *             If 0: val is ignored. You must provide soutputvals.
 * float val:  See setall.
 * float soutputvals: An array of output values.  The array should contain
 *                    bkp_config_t.NumOutputs elements.
 *
 * Return Values:
 * int 0: Success
 *    -1: Error, errno is:
 *        ENOENT if no bkp_create_network() has been done yet.
 */

int bkp_set_output(bkp_network_t *n, int setall, float val, float *soutputvals)
{
   int i;

   if (!n) {
      errno = ENOENT;
      return -1;
   }

   if (setall) {
      for (i = 0;  i < n->NumOutputs;  i++)
         n->GivenDesiredOutputVals[i] = val;
   } else {
      memcpy(n->GivenDesiredOutputVals, soutputvals, n->NumOutputs*sizeof(float));
   }

   n->DesiredOutputReady = 1;
   return 0;
}

/*
 * bkp_loadfromfile - Creates a neural network using the information
 * loaded from the given file and returns a pointer to it in n.
 * If successful, the end result of this will be a neural network
 * for which bkp_create_network() will effectively have been done.
 *
 * Return Values:
 * int 0: Success
 *    -1: Error, errno is:
 *        EOK or any applicable errors from the open() or read() functions.
 *        ENOMEM if no memory.
 *        EINVAL if the file is not in the correct format.
 */

int bkp_loadfromfile(bkp_network_t **n, char *fname)
{
   char file_format;
   int fd, returncode;
   bkp_config_t config;
   
   returncode = -1;
   
   if ((fd = open(fname, O_RDONLY)) == -1)
      return returncode;

   if (read(fd, &file_format, sizeof(char)) == -1)
      goto cleanupandret;
   if (file_format != 'A') {
      errno = EINVAL;
      goto cleanupandret;
   }
   if (read(fd, &config.Type, sizeof(short)) == -1)
      goto cleanupandret;
   if (read(fd, &config.NumInputs, sizeof(int)) == -1)
      goto cleanupandret;
   if (read(fd, &config.NumHidden, sizeof(int)) == -1)
      goto cleanupandret;
   if (read(fd, &config.NumOutputs, sizeof(int)) == -1)
      goto cleanupandret;
   if (read(fd, &config.StepSize, sizeof(float)) == -1)
      goto cleanupandret;
   if (read(fd, &config.Momentum, sizeof(float)) == -1)
      goto cleanupandret;
   if (read(fd, &config.Cost, sizeof(float)) == -1)
      goto cleanupandret;

   if (bkp_create_network(n, &config) == -1) {
      goto cleanupandret;
   }

   (*n)->InputVals = (*n)->GivenInputVals;
   (*n)->DesiredOutputVals = (*n)->GivenDesiredOutputVals;

   if (read(fd, (int *) &(*n)->NumBias, sizeof(int)) == -1)
      goto errandret;
       
   if (read(fd, (int *) &(*n)->InputReady, sizeof(int)) == -1)
      goto errandret;
   if (read(fd, (int *) &(*n)->DesiredOutputReady, sizeof(int)) == -1)
      goto errandret;
   if (read(fd, (int *) &(*n)->Learned, sizeof(int)) == -1)
      goto errandret;
       
   if (read(fd, (*n)->InputVals, (*n)->NumInputs * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->DesiredOutputVals, (*n)->NumOutputs * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->IHWeights, (*n)->NumInputs * (*n)->NumHidden * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->PrevDeltaIH, (*n)->NumInputs * (*n)->NumHidden * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->PrevDeltaHO, (*n)->NumHidden * (*n)->NumOutputs * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->PrevDeltaBH, (*n)->NumBias * (*n)->NumHidden * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->PrevDeltaBO, (*n)->NumBias * (*n)->NumOutputs * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->HiddenVals, (*n)->NumHidden * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->HiddenBetas, (*n)->NumHidden * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->HOWeights, (*n)->NumHidden * (*n)->NumOutputs * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->BiasVals, (*n)->NumBias * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->BHWeights, (*n)->NumBias * (*n)->NumHidden * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->BOWeights, (*n)->NumBias * (*n)->NumOutputs * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->OutputVals, (*n)->NumOutputs * sizeof(float)) == -1)
      goto errandret;
   if (read(fd, (*n)->OutputBetas, (*n)->NumOutputs * sizeof(float)) == -1)
      goto errandret;
       
   returncode = 0;
   goto cleanupandret;

errandret:
   bkp_destroy_network(*n);

cleanupandret:
   close(fd);
   
   return returncode;
}   

/*
 * bkp_savetofile
 *
 * The format of the file is:
 *
 *  1. File format version e.g. 'A' (sizeof(char))
 *  2. Network type BACKPROP_TYPE_* (sizeof(short))
 *  3. Number of inputs (sizeof(int))
 *  4. Number of hidden units (sizeof(int))
 *  5. Number of outputs (sizeof(int))
 *  6. StepSize (sizeof(float))
 *  7. Momentum (sizeof(float))
 *  8. Cost (sizeof(float))
 *  9. Number of bias units (sizeof(int))
 * 10. Is input ready? 0 = no, 1 = yes (sizeof(int))
 * 11. Is desired output ready? 0 = no, 1 = yes (sizeof(int))
 * 12. Has some learning been done? 0 = no, 1 = yes (sizeof(int))
 * 13. Current input values (InputVals) (NumInputs * sizeof(float))
 * 14. Current desired output values (DesiredOutputVals) (NumOutputs * sizeof(float))
 * 15. Current input-hidden weights (IHWeights) (NumInputs * NumHidden * sizeof(float))
 * 16. Previous input-hidden weight deltas (PrevDeltaIH) (NumInputs * NumHidden * sizeof(float))
 * 17. Previous output-hidden weight deltas (PrevDeltaHO) (NumHidden * NumOutputs * sizeof(float))
 * 18. Previous bias-hidden weight deltas (PrevDeltaBH) (NumBias * NumHidden * sizeof(float))
 * 19. Previous bias-output weight deltas (PrevDeltaBO) (NumBias * NumOutputs * sizeof(float))
 * 20. Current hidden unit values (HiddenVals) (NumHidden * sizeof(float))
 * 21. Current hidden unit beta values (HiddenBetas) (NumHidden * sizeof(float))
 * 22. Current hidden-output weights (HOWeights) (NumHidden * NumOutputs * sizeof(float))
 * 23. Current bias unit values (BiasVals) (NumBias * sizeof(float))
 * 24. Current bias-hidden weights (BHWeights) (NumBias * NumHidden * sizeof(float))
 * 25. Current bias-output weights (BOWeights) (NumBias * NumOutputs * sizeof(float))
 * 26. Current output values (OutputVals) (NumOutputs * sizeof(float))
 * 27. Current output unit betas (OutputBetas) (NumOutputs * sizeof(float))
 *
 * Return Values:
 * int 0: Success
 *    -1: Error, errno is:
 *        ENOENT if no bkp_create_network() has been done yet.
 *        EOK or any applicable errors from the open() or write()
 *        functions.
 */

int bkp_savetofile(bkp_network_t *n, char *fname)
{
   int fd, returncode;
   short type = BACKPROP_TYPE_NORMAL;
   
   returncode = -1;
   
   fd = open(fname, O_WRONLY | O_CREAT | O_TRUNC,
         S_IRUSR | S_IWUSR);
         // For Unix/Linux-like environments the following can also be used
         // | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH);
   if (fd == -1)
      return returncode;
   
   if (write(fd, (char *) "A", sizeof(char)) == -1) // file format version A
      goto cleanupandret;
   if (write(fd, (short *) &type, sizeof(short)) == -1) // BACKPROP_TYPE_*
      goto cleanupandret;
   if (write(fd, (int *) &n->NumInputs, sizeof(int)) == -1)
      goto cleanupandret;
   if (write(fd, (int *) &n->NumHidden, sizeof(int)) == -1)
      goto cleanupandret;
   if (write(fd, (int *) &n->NumOutputs, sizeof(int)) == -1)
      goto cleanupandret;
   if (write(fd, (float *) &n->StepSize, sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, (float *) &n->Momentum, sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, (float *) &n->Cost, sizeof(float)) == -1)
      goto cleanupandret;

   if (write(fd, (int *) &n->NumBias, sizeof(int)) == -1)
      goto cleanupandret;
       
   if (write(fd, (int *) &n->InputReady, sizeof(int)) == -1)
      goto cleanupandret;
   if (write(fd, (int *) &n->DesiredOutputReady, sizeof(int)) == -1)
      goto cleanupandret;
   if (write(fd, (int *) &n->Learned, sizeof(int)) == -1)
      goto cleanupandret;
       
   if (write(fd, n->InputVals, n->NumInputs * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->DesiredOutputVals, n->NumOutputs * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->IHWeights, n->NumInputs * n->NumHidden * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->PrevDeltaIH, n->NumInputs * n->NumHidden * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->PrevDeltaHO, n->NumHidden * n->NumOutputs * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->PrevDeltaBH, n->NumBias * n->NumHidden * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->PrevDeltaBO, n->NumBias * n->NumOutputs * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->HiddenVals, n->NumHidden * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->HiddenBetas, n->NumHidden * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->HOWeights, n->NumHidden * n->NumOutputs * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->BiasVals, n->NumBias * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->BHWeights, n->NumBias * n->NumHidden * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->BOWeights, n->NumBias * n->NumOutputs * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->OutputVals, n->NumOutputs * sizeof(float)) == -1)
      goto cleanupandret;
   if (write(fd, n->OutputBetas, n->NumOutputs * sizeof(float)) == -1)
      goto cleanupandret;
       
   returncode = 0;
       
cleanupandret:
   close(fd);
   
   return returncode;
}
 
regards, saludos

Antonio Linares
www.fivetechsoft.com

Re: Artificial intelligence - TPerceptron class

Postby Antonio Linares » Fri May 26, 2017 6:36 pm

http://inkdrop.net/dave/docs/neural-net-tutorial.cpp

Code:
// neural-net-tutorial.cpp
// David Miller, http://millermattson.com/dave
// See the associated video for instructions: http://vimeo.com/19569529


#include <vector>
#include <iostream>
#include <cstdlib>
#include <cassert>
#include <cmath>
#include <fstream>
#include <sstream>

using namespace std;

// Silly class to read training data from a text file -- Replace This.
// Replace class TrainingData with whatever you need to get input data into the
// program, e.g., connect to a database, or take a stream of data from stdin, or
// from a file specified by a command line argument, etc.

class TrainingData
{
public:
    TrainingData(const string filename);
    bool isEof(void) { return m_trainingDataFile.eof(); }
    void getTopology(vector<unsigned> &topology);

    // Returns the number of input values read from the file:
    unsigned getNextInputs(vector<double> &inputVals);
    unsigned getTargetOutputs(vector<double> &targetOutputVals);

private:
    ifstream m_trainingDataFile;
};

void TrainingData::getTopology(vector<unsigned> &topology)
{
    string line;
    string label;

    getline(m_trainingDataFile, line);
    stringstream ss(line);
    ss >> label;
    if (this->isEof() || label.compare("topology:") != 0) {
        abort();
    }

    while (!ss.eof()) {
        unsigned n;
        ss >> n;
        topology.push_back(n);
    }

    return;
}

TrainingData::TrainingData(const string filename)
{
    m_trainingDataFile.open(filename.c_str());
}

unsigned TrainingData::getNextInputs(vector<double> &inputVals)
{
    inputVals.clear();

    string line;
    getline(m_trainingDataFile, line);
    stringstream ss(line);

    string label;
    ss>> label;
    if (label.compare("in:") == 0) {
        double oneValue;
        while (ss >> oneValue) {
            inputVals.push_back(oneValue);
        }
    }

    return inputVals.size();
}

unsigned TrainingData::getTargetOutputs(vector<double> &targetOutputVals)
{
    targetOutputVals.clear();

    string line;
    getline(m_trainingDataFile, line);
    stringstream ss(line);

    string label;
    ss>> label;
    if (label.compare("out:") == 0) {
        double oneValue;
        while (ss >> oneValue) {
            targetOutputVals.push_back(oneValue);
        }
    }

    return targetOutputVals.size();
}


struct Connection
{
    double weight;
    double deltaWeight;
};


class Neuron;

typedef vector<Neuron> Layer;

// ****************** class Neuron ******************
class Neuron
{
public:
    Neuron(unsigned numOutputs, unsigned myIndex);
    void setOutputVal(double val) { m_outputVal = val; }
    double getOutputVal(void) const { return m_outputVal; }
    void feedForward(const Layer &prevLayer);
    void calcOutputGradients(double targetVal);
    void calcHiddenGradients(const Layer &nextLayer);
    void updateInputWeights(Layer &prevLayer);

private:
    static double eta;   // [0.0..1.0] overall net training rate
    static double alpha; // [0.0..n] multiplier of last weight change (momentum)
    static double transferFunction(double x);
    static double transferFunctionDerivative(double x);
    static double randomWeight(void) { return rand() / double(RAND_MAX); }
    double sumDOW(const Layer &nextLayer) const;
    double m_outputVal;
    vector<Connection> m_outputWeights;
    unsigned m_myIndex;
    double m_gradient;
};

double Neuron::eta = 0.15;    // overall net learning rate, [0.0..1.0]
double Neuron::alpha = 0.5;   // momentum, multiplier of last deltaWeight, [0.0..1.0]


void Neuron::updateInputWeights(Layer &prevLayer)
{
    // The weights to be updated are in the Connection container
    // in the neurons in the preceding layer

    for (unsigned n = 0; n < prevLayer.size(); ++n) {
        Neuron &neuron = prevLayer[n];
        double oldDeltaWeight = neuron.m_outputWeights[m_myIndex].deltaWeight;

        double newDeltaWeight =
                // Individual input, magnified by the gradient and train rate:
                eta
                * neuron.getOutputVal()
                * m_gradient
                // Also add momentum = a fraction of the previous delta weight;
                + alpha
                * oldDeltaWeight;

        neuron.m_outputWeights[m_myIndex].deltaWeight = newDeltaWeight;
        neuron.m_outputWeights[m_myIndex].weight += newDeltaWeight;
    }
}

double Neuron::sumDOW(const Layer &nextLayer) const
{
    double sum = 0.0;

    // Sum our contributions of the errors at the nodes we feed.

    for (unsigned n = 0; n < nextLayer.size() - 1; ++n) {
        sum += m_outputWeights[n].weight * nextLayer[n].m_gradient;
    }

    return sum;
}

void Neuron::calcHiddenGradients(const Layer &nextLayer)
{
    double dow = sumDOW(nextLayer);
    m_gradient = dow * Neuron::transferFunctionDerivative(m_outputVal);
}

void Neuron::calcOutputGradients(double targetVal)
{
    double delta = targetVal - m_outputVal;
    m_gradient = delta * Neuron::transferFunctionDerivative(m_outputVal);
}

double Neuron::transferFunction(double x)
{
    // tanh - output range [-1.0..1.0]

    return tanh(x);
}

double Neuron::transferFunctionDerivative(double x)
{
    // tanh derivative
    return 1.0 - x * x;
}

void Neuron::feedForward(const Layer &prevLayer)
{
    double sum = 0.0;

    // Sum the previous layer's outputs (which are our inputs)
    // Include the bias node from the previous layer.

    for (unsigned n = 0; n < prevLayer.size(); ++n) {
        sum += prevLayer[n].getOutputVal() *
                prevLayer[n].m_outputWeights[m_myIndex].weight;
    }

    m_outputVal = Neuron::transferFunction(sum);
}

Neuron::Neuron(unsigned numOutputs, unsigned myIndex)
{
    for (unsigned c = 0; c < numOutputs; ++c) {
        m_outputWeights.push_back(Connection());
        m_outputWeights.back().weight = randomWeight();
    }

    m_myIndex = myIndex;
}


// ****************** class Net ******************
class Net
{
public:
    Net(const vector<unsigned> &topology);
    void feedForward(const vector<double> &inputVals);
    void backProp(const vector<double> &targetVals);
    void getResults(vector<double> &resultVals) const;
    double getRecentAverageError(void) const { return m_recentAverageError; }

private:
    vector<Layer> m_layers; // m_layers[layerNum][neuronNum]
    double m_error;
    double m_recentAverageError;
    static double m_recentAverageSmoothingFactor;
};


double Net::m_recentAverageSmoothingFactor = 100.0; // Number of training samples to average over


void Net::getResults(vector<double> &resultVals) const
{
    resultVals.clear();

    for (unsigned n = 0; n < m_layers.back().size() - 1; ++n) {
        resultVals.push_back(m_layers.back()[n].getOutputVal());
    }
}

void Net::backProp(const vector<double> &targetVals)
{
    // Calculate overall net error (RMS of output neuron errors)

    Layer &outputLayer = m_layers.back();
    m_error = 0.0;

    for (unsigned n = 0; n < outputLayer.size() - 1; ++n) {
        double delta = targetVals[n] - outputLayer[n].getOutputVal();
        m_error += delta * delta;
    }
    m_error /= outputLayer.size() - 1; // get average error squared
    m_error = sqrt(m_error); // RMS

    // Implement a recent average measurement

    m_recentAverageError =
            (m_recentAverageError * m_recentAverageSmoothingFactor + m_error)
            / (m_recentAverageSmoothingFactor + 1.0);

    // Calculate output layer gradients

    for (unsigned n = 0; n < outputLayer.size() - 1; ++n) {
        outputLayer[n].calcOutputGradients(targetVals[n]);
    }

    // Calculate hidden layer gradients

    for (unsigned layerNum = m_layers.size() - 2; layerNum > 0; --layerNum) {
        Layer &hiddenLayer = m_layers[layerNum];
        Layer &nextLayer = m_layers[layerNum + 1];

        for (unsigned n = 0; n < hiddenLayer.size(); ++n) {
            hiddenLayer[n].calcHiddenGradients(nextLayer);
        }
    }

    // For all layers from outputs to first hidden layer,
    // update connection weights

    for (unsigned layerNum = m_layers.size() - 1; layerNum > 0; --layerNum) {
        Layer &layer = m_layers[layerNum];
        Layer &prevLayer = m_layers[layerNum - 1];

        for (unsigned n = 0; n < layer.size() - 1; ++n) {
            layer[n].updateInputWeights(prevLayer);
        }
    }
}

void Net::feedForward(const vector<double> &inputVals)
{
    assert(inputVals.size() == m_layers[0].size() - 1);

    // Assign (latch) the input values into the input neurons
    for (unsigned i = 0; i < inputVals.size(); ++i) {
        m_layers[0][i].setOutputVal(inputVals[i]);
    }

    // forward propagate
    for (unsigned layerNum = 1; layerNum < m_layers.size(); ++layerNum) {
        Layer &prevLayer = m_layers[layerNum - 1];
        for (unsigned n = 0; n < m_layers[layerNum].size() - 1; ++n) {
            m_layers[layerNum][n].feedForward(prevLayer);
        }
    }
}

Net::Net(const vector<unsigned> &topology)
{
    unsigned numLayers = topology.size();
    for (unsigned layerNum = 0; layerNum < numLayers; ++layerNum) {
        m_layers.push_back(Layer());
        unsigned numOutputs = layerNum == topology.size() - 1 ? 0 : topology[layerNum + 1];

        // We have a new layer, now fill it with neurons, and
        // add a bias neuron in each layer.
        for (unsigned neuronNum = 0; neuronNum <= topology[layerNum]; ++neuronNum) {
            m_layers.back().push_back(Neuron(numOutputs, neuronNum));
            cout << "Made a Neuron!" << endl;
        }

        // Force the bias node's output to 1.0 (it was the last neuron pushed in this layer):
        m_layers.back().back().setOutputVal(1.0);
    }
}


void showVectorVals(string label, vector<double> &v)
{
    cout << label << " ";
    for (unsigned i = 0; i < v.size(); ++i) {
        cout << v[i] << " ";
    }

    cout << endl;
}


int main()
{
    TrainingData trainData("/tmp/trainingData.txt");

    // e.g., { 3, 2, 1 }
    vector<unsigned> topology;
    trainData.getTopology(topology);

    Net myNet(topology);

    vector<double> inputVals, targetVals, resultVals;
    int trainingPass = 0;

    while (!trainData.isEof()) {
        ++trainingPass;
        cout << endl << "Pass " << trainingPass;

        // Get new input data and feed it forward:
        if (trainData.getNextInputs(inputVals) != topology[0]) {
            break;
        }
        showVectorVals(": Inputs:", inputVals);
        myNet.feedForward(inputVals);

        // Collect the net's actual output results:
        myNet.getResults(resultVals);
        showVectorVals("Outputs:", resultVals);

        // Train the net what the outputs should have been:
        trainData.getTargetOutputs(targetVals);
        showVectorVals("Targets:", targetVals);
        assert(targetVals.size() == topology.back());

        myNet.backProp(targetVals);

        // Report how well the training is working, average over recent samples:
        cout << "Net recent average error: "
                << myNet.getRecentAverageError() << endl;
    }

    cout << endl << "Done" << endl;
}
 
regards, saludos

Antonio Linares
www.fivetechsoft.com

Re: Artificial intelligence - TPerceptron class

Postby Antonio Linares » Fri May 26, 2017 6:39 pm

Here is the TNeuron class developed by David Miller, adapted to Harbour:

Code:
CLASS TNeuron

   DATA nIndex
   DATA nOutput
   DATA aWeights
   DATA nGradient

   CLASSDATA nEta INIT 0.15

   CLASSDATA nAlpha INIT 0.5

   METHOD New( nOutputs, nIndex )

   METHOD FeedForward( aPrevLayer )

   METHOD CalcOutputGradients( nTarget )

   METHOD CalcHiddenGradients( aNextLayer )

   METHOD UpdateInputWeights( aPrevLayer )

   METHOD SumDOW( aNextLayer)

ENDCLASS

METHOD New( nInputs, nIndex ) CLASS TNeuron

   local n

   ::aWeights = Array( nInputs )

   for n = 1 to nInputs
      ::aWeights[ n ] = hb_Random() // rand() / double(RAND_MAX)
   next

   ::nIndex = nIndex

return Self

METHOD UpdateInputWeights( aPrevLayer ) CLASS TNeuron

   local n, oNeuron, nOldDeltaWeight, nNewDeltaWeight

    // The weights to be updated are in the Connection container
    // in the neurons in the preceding layer

    for n = 1 to Len( aPrevLayer )
       oNeuron = aPrevLayer[ n ]
       nOldDeltaWeight = oNeuron:aWeights[ ::nIndex ]:DeltaWeight
       nNewDeltaWeight = ::nEta * oNeuron:nOutput * ::nGradient + ::nAlpha * nOldDeltaWeight
                // Individual input, magnified by the gradient and train rate:
                // Also add momentum = a fraction of the previous delta weight;

       oNeuron:aWeights[ ::nIndex ]:nDeltaWeight = nNewDeltaWeight
       oNeuron:aWeights[ ::nIndex ]:Weight += nNewDeltaWeight
    next

return nil

METHOD SumDOW( aNextLayer ) CLASS TNeuron

   local nSum := 0, n

    // Sum our contributions of the errors at the nodes we feed.

    for n = 1 to Len( aNextLayer )
       nSum += ::aWeights[ n ]:weight * aNextLayer[ n ]:nGradient
    next

return nSum

METHOD CalcHiddenGradients( aNextLayer ) CLASS TNeuron

    local nDow := ::SumDOW( aNextLayer )
   
    ::nGradient = nDow * ( 1.0 - ::nOutput * ::nOutput )

return nil

METHOD CalcOutputGradients( nTarget ) CLASS TNeuron

   local nDelta := nTarget - ::nOutput

   ::nGradient = nDelta * ( 1.0 - ::nOutput * ::nOutput )

return nil

METHOD FeedForward( aPrevLayer ) CLASS TNeuron

   local nSum := 0, n

    // Sum the previous layer's outputs (which are our inputs)
    // Include the bias node from the previous layer.

    for n = 1 to Len( aPrevLayer )
       nSum += aPrevLayer[ n ]:nOutput * ;
               aPrevLayer[ n ]:aWeights[ ::nIndex ]
    next

    ::nOutput = tanh( nSum )

return nil
 
regards, saludos

Antonio Linares
www.fivetechsoft.com

Re: Artificial intelligence - TPerceptron class

Postby Antonio Linares » Fri May 26, 2017 7:56 pm

David Miller's code ported to Harbour :-)

You can run this example to inspect the neural network

neuralnet.prg
Code:
#include "FiveWin.ch"
// #include "hbclass.ch"

function Main()

   local oNet := TNet():New( { 2, 1 } )

   XBrowser( oNet )

return nil

CLASS TNeuron

   DATA nIndex
   DATA nOutput
   DATA aWeights
   DATA nGradient

   CLASSDATA nEta INIT 0.15

   CLASSDATA nAlpha INIT 0.5

   METHOD New( nOutputs, nIndex )

   METHOD FeedForward( aPrevLayer )

   METHOD CalcOutputGradients( nTarget )

   METHOD CalcHiddenGradients( aNextLayer )

   METHOD UpdateInputWeights( aPrevLayer )

   METHOD SumDOW( aNextLayer)

ENDCLASS

METHOD New( nInputs, nIndex ) CLASS TNeuron

   local n

   ::aWeights = Array( nInputs )

   for n = 1 to nInputs
      ::aWeights[ n ] = hb_Random() // rand() / double(RAND_MAX)
   next

   ::nIndex = nIndex

return Self

METHOD UpdateInputWeights( aPrevLayer ) CLASS TNeuron

   local n, oNeuron, nOldDeltaWeight, nNewDeltaWeight

    // The weights to be updated are in the Connection container
    // in the neurons in the preceding layer

    for n = 1 to Len( aPrevLayer )
       oNeuron = aPrevLayer[ n ]
       nOldDeltaWeight = oNeuron:aWeights[ ::nIndex ]:DeltaWeight
       nNewDeltaWeight = ::nEta * oNeuron:nOutput * ::nGradient + ::nAlpha * nOldDeltaWeight
                // Individual input, magnified by the gradient and train rate:
                // Also add momentum = a fraction of the previous delta weight;

       oNeuron:aWeights[ ::nIndex ]:nDeltaWeight = nNewDeltaWeight
       oNeuron:aWeights[ ::nIndex ]:Weight += nNewDeltaWeight
    next

return nil

METHOD SumDOW( aNextLayer ) CLASS TNeuron

   local nSum := 0, n

    // Sum our contributions of the errors at the nodes we feed.

    for n = 1 to Len( aNextLayer )
       nSum += ::aWeights[ n ]:weight * aNextLayer[ n ]:nGradient
    next

return nSum

METHOD CalcHiddenGradients( aNextLayer ) CLASS TNeuron

    local nDow := ::SumDOW( aNextLayer )
   
    ::nGradient = nDow * ( 1.0 - ::nOutput * ::nOutput )

return nil

METHOD CalcOutputGradients( nTarget ) CLASS TNeuron

   local nDelta := nTarget - ::nOutput

   ::nGradient = nDelta * ( 1.0 - ::nOutput * ::nOutput )

return nil

METHOD FeedForward( aPrevLayer ) CLASS TNeuron

   local nSum := 0, n

    // Sum the previous layer's outputs (which are our inputs)
    // Include the bias node from the previous layer.

    for n = 1 to Len( aPrevLayer )
       nSum += aPrevLayer[ n ]:nOutput * ;
               aPrevLayer[ n ]:aWeights[ ::nIndex ]
    next

    ::nOutput = tanh( nSum )

return nil

CLASS TNet

   DATA  aLayers INIT {}
   DATA  nError
   DATA  nRecentAverageError

   CLASSDATA nRecentAverageSmoothingFactor INIT 100 // Number of training samples to average over

   METHOD New( aTopology )
   METHOD FeedForward( aInput )
   METHOD BackProp( aTarget )
   METHOD GetResults( aResults )

ENDCLASS

METHOD GetResults( aResults ) CLASS TNet

   local n

   aResults = {}

   for n = 1 to Len( ::aLayers )
      aResults[ n ] = ::aLayers[ n ]:GetOutputVal()
   next
   
return nil

METHOD BackProp( aTargetVals ) CLASS TNet

    // Calculate overall net error (RMS of output neuron errors)

    local aOutputLayer := ATail( ::aLayers ), n, m
    local aHiddenLayer, aNextLayer, aLayer, aPrevLayer, nDelta
   
    ::nError = 0

    for n = 1 to Len( aOutputLayer )
       nDelta = aTargetVals[ 1 ] - aOutputLayer[ n ]:nOutput
       ::nError += nDelta * nDelta
    next    

    ::nError /= Len( aOutputLayer ) // get average error squared
    ::nError = sqrt( ::nError ) // RMS

    // Implement a recent average measurement

    ::nRecentAverageError = ( ::nRecentAverageError * ::nRecentAverageSmoothingFactor + ::nError ) ;
                            / ( ::nRecentAverageSmoothingFactor + 1 )

    // Calculate output layer gradients

    for n = 1 to Len( aOutputLayer )
        aOutputLayer[ n ]:CalcOutputGradients( aTargetVals[ n ] )
    next

    // Calculate hidden layer gradients

    for n = Len( ::aLayers ) - 2 to 1 step -1
        aHiddenLayer = ::aLayers[ n ]
        aNextLayer = ::aLayers[ n + 1 ]

        for m = 1 to Len( aHiddenLayer )
           aHiddenLayer[ m ]:CalcHiddenGradients( aNextLayer )
        next
    next

    // For all layers from outputs to first hidden layer,
    // update connection weights

    for n = Len( ::aLayers ) - 1 to 1 step -1
        aLayer = ::aLayers[ n ]
        aPrevLayer = ::aLayers[ n - 1 ]

        for m = 1 to Len( aLayer )
           aLayer[ m ]:UpdateInputWeights( aPrevLayer )
        next
    next
   
return nil

METHOD FeedForward( aInputVals ) CLASS TNet

   local n, m, aPrevLayer

   if Len( aInputVals ) != Len( ::aLayers[ 1 ] )
      MsgInfo( "assert error", "Len( aInputVals ) != Len( ::aLayers[ 1 ] )" )
   endif  

    // Assign (latch) the input values into the input neurons
    for n = 1 to Len( aInputVals )
        ::aLayers[ 1 ][ n ]:nOutput = aInputVals[ n ]
    next

    // forward propagate
    for n = 2 to Len( ::aLayers )
       aPrevLayer = ::aLayers[ n - 1 ]
       for m = 1 to Len( ::aLayers[ n ] ) - 1
           ::aLayers[ n ][ m ]:FeedForward( aPrevLayer )
       next
    next
   
return nil

METHOD New( aTopology ) CLASS TNet

   local nLayers := Len( aTopology ), n, m
 
   for n = 1 to nLayers
      AAdd( ::aLayers, Array( aTopology[ n ] ) )

      // We have a new layer, now fill it with neurons;
      // the last neuron in each layer will act as the bias node.
      for m = 1 to aTopology[ n ]
         ::aLayers[ n, m ] = TNeuron():New( Len( aTopology ), m )
         // cout << "Made a Neuron!" << endl;
      next

      // Force the bias node's output to 1.0 (it was the last neuron pushed in this layer):
      ATail( ::aLayers[ n ] ):nOutput = 1
    next

return Self

/*
void showVectorVals(string label, vector<double> &v)
{
    cout << label << " ";
    for (unsigned i = 0; i < v.size(); ++i) {
        cout << v[i] << " ";
    }

    cout << endl;
}
*/
regards, saludos

Antonio Linares
www.fivetechsoft.com

Re: Inteligencia artificial - Clase TPerceptron

Postby Antonio Linares » Fri May 26, 2017 8:03 pm

For example, to create this neural network:

[image: diagram of the { 3, 4, 2 } network topology: 3 inputs, 4 hidden neurons, 2 outputs]

we do:

local oNet := TNet():New( { 3, 4, 2 } )
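
A minimal usage sketch, assuming the TNet / TNeuron classes from the listing above, just to show how the topology array maps onto layers and neurons:

Code:
local oNet := TNet():New( { 3, 4, 2 } )

MsgInfo( Len( oNet:aLayers ) )        // 3 layers
MsgInfo( Len( oNet:aLayers[ 2 ] ) )   // 4 neurons in the hidden layer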
regards, saludos

Antonio Linares
www.fivetechsoft.com

Re: Inteligencia artificial - Clase TPerceptron

Postby Antonio Linares » Fri May 26, 2017 8:28 pm

Fixed the GetResults() method:
Code:
METHOD GetResults() CLASS TNet

   local aResults := Array( Len( ATail( ::aLayers ) ) )
   local n

   for n = 1 to Len( ATail( ::aLayers ) )
      aResults[ n ] = ATail( ::aLayers )[ n ]:nOutput
   next
   
return aResults
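
Compared with the first version, the method now builds the results array from the last layer only and returns it. A hypothetical call, assuming a { 1, 2, 1 } net that has already been fed forward:

Code:
aOut = oNet:GetResults()
MsgInfo( Len( aOut ) )   // 1, i.e. one value per neuron of the output layer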
regards, saludos

Antonio Linares
www.fivetechsoft.com

Re: Inteligencia artificial - Clase TPerceptron

Postby Antonio Linares » Fri May 26, 2017 8:59 pm

Improved version, still in the testing phase:

Code:
#include "FiveWin.ch"

function Main()

   local oNet := TNet():New( { 1, 2, 1 } ), n
   local x

   for n = 1 to 2000
      oNet:FeedForward( { x := hb_random() } )
      oNet:Backprop( { If( x % 5 == 0, 5, 1 ) } )
   next  

   oNet:FeedForward( { 15 } )
   
   MsgInfo( oNet:nRecentAverageError )
   
   XBrowser( oNet:GetResults() )

   XBrowser( oNet )

return nil

CLASS TNeuron

   DATA nIndex
   DATA nOutput
   DATA aWeights
   DATA aDeltaWeights
   DATA nGradient INIT 0

   CLASSDATA nEta INIT 0.15

   CLASSDATA nAlpha INIT 0.5
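   // nEta is the overall learning rate ( 0..1 ) and nAlpha the momentum,
   // i.e. the fraction of the previous delta weight carried over on each update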

   METHOD New( nOutputs, nIndex )

   METHOD FeedForward( aPrevLayer )

   METHOD CalcOutputGradients( nTarget )

   METHOD CalcHiddenGradients( aNextLayer )

   METHOD UpdateInputWeights( aPrevLayer )

   METHOD SumDOW( aNextLayer)

ENDCLASS

METHOD New( nInputs, nIndex ) CLASS TNeuron

   local n

   ::aWeights = Array( nInputs )
   ::aDeltaWeights = Array( nInputs )

   for n = 1 to nInputs
      ::aWeights[ n ] = hb_Random() // rand() / double(RAND_MAX)
      ::aDeltaWeights[ n ] = 0
   next

   ::nIndex = nIndex

return Self

METHOD UpdateInputWeights( aPrevLayer ) CLASS TNeuron

   local n, oNeuron, nOldDeltaWeight, nNewDeltaWeight

    // The weights to be updated are stored in the aWeights / aDeltaWeights
    // arrays of the neurons in the preceding layer

    for n = 1 to Len( aPrevLayer )
       oNeuron = aPrevLayer[ n ]
       nOldDeltaWeight = oNeuron:aDeltaWeights[ ::nIndex ]
       nNewDeltaWeight = ::nEta * oNeuron:nOutput * ::nGradient + ::nAlpha * nOldDeltaWeight
                // Individual input, magnified by the gradient and train rate:
                // Also add momentum = a fraction of the previous delta weight;
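                 // e.g. with nEta = 0.15, nOutput = 0.5, nGradient = 0.2, nAlpha = 0.5
                 // and an old delta of 0.01: 0.15 * 0.5 * 0.2 + 0.5 * 0.01 = 0.02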

       oNeuron:aDeltaWeights[ ::nIndex ] = nNewDeltaWeight
       oNeuron:aWeights[ ::nIndex ] += nNewDeltaWeight
    next

return nil

METHOD SumDOW( aNextLayer ) CLASS TNeuron

   local nSum := 0, n

    // Sum our contributions of the errors at the nodes we feed.

    for n = 1 to Len( aNextLayer )
       nSum += ::aWeights[ n ] * aNextLayer[ n ]:nGradient
    next

return nSum

METHOD CalcHiddenGradients( aNextLayer ) CLASS TNeuron

    local nDow := ::SumDOW( aNextLayer )
   
    ::nGradient = nDow * ( 1.0 - ::nOutput * ::nOutput )

return nil

METHOD CalcOutputGradients( nTarget ) CLASS TNeuron

   local nDelta := nTarget - ::nOutput

   ::nGradient = nDelta * ( 1.0 - ::nOutput * ::nOutput )

return nil

METHOD FeedForward( aPrevLayer ) CLASS TNeuron

   local nSum := 0, n

    // Sum the previous layer's outputs (which are our inputs)
    // Include the bias node from the previous layer.

    for n = 1 to Len( aPrevLayer )
       nSum += aPrevLayer[ n ]:nOutput * ;
               aPrevLayer[ n ]:aWeights[ ::nIndex ]
    next

    ::nOutput = tanh( nSum )

return nil

CLASS TNet

   DATA  aLayers INIT {}
   DATA  nError
   DATA  nRecentAverageError INIT 0

   CLASSDATA nRecentAverageSmoothingFactor INIT 100 // Number of training samples to average over

   METHOD New( aTopology )
   METHOD FeedForward( aInput )
   METHOD BackProp( aTarget )
   METHOD GetResults()

ENDCLASS

METHOD GetResults() CLASS TNet

   local aResults := Array( Len( ATail( ::aLayers ) ) )
   local n

   for n = 1 to Len( ATail( ::aLayers ) )
      aResults[ n ] = ATail( ::aLayers )[ n ]:nOutput
   next
   
return aResults

METHOD BackProp( aTargetVals ) CLASS TNet

    // Calculate overall net error (RMS of output neuron errors)

    local aOutputLayer := ATail( ::aLayers ), n, m
    local aHiddenLayer, aNextLayer, aLayer, aPrevLayer, nDelta
   
    ::nError = 0

    for n = 1 to Len( aOutputLayer )
       nDelta = aTargetVals[ 1 ] - aOutputLayer[ n ]:nOutput
       ::nError += nDelta * nDelta
    next    

    ::nError /= Len( aOutputLayer ) // get average error squared
    ::nError = sqrt( ::nError ) // RMS

    // Implement a recent average measurement

    ::nRecentAverageError = ( ::nRecentAverageError * ::nRecentAverageSmoothingFactor + ::nError ) ;
                            / ( ::nRecentAverageSmoothingFactor + 1 )

    // Calculate output layer gradients

    for n = 1 to Len( aOutputLayer )
        aOutputLayer[ n ]:CalcOutputGradients( aTargetVals[ n ] )
    next

    // Calculate hidden layer gradients

    for n = Len( ::aLayers ) - 2 to 1 step -1
        aHiddenLayer = ::aLayers[ n ]
        aNextLayer = ::aLayers[ n + 1 ]

        for m = 1 to Len( aHiddenLayer )
           aHiddenLayer[ m ]:CalcHiddenGradients( aNextLayer )
        next
    next

    // For all layers from outputs to first hidden layer,
    // update connection weights

     for n = Len( ::aLayers ) to 2 step -1
        aLayer = ::aLayers[ n ]
        aPrevLayer = ::aLayers[ n - 1 ]

        for m = 1 to Len( aLayer )
           aLayer[ m ]:UpdateInputWeights( aPrevLayer )
        next
    next
   
return nil

METHOD FeedForward( aInputVals ) CLASS TNet

   local n, m, aPrevLayer

    // Assign (latch) the input values into the input neurons
    for n = 1 to Len( aInputVals )
        ::aLayers[ 1 ][ n ]:nOutput = aInputVals[ n ]
    next

    // forward propagate
    for n = 2 to Len( ::aLayers )
       aPrevLayer = ::aLayers[ n - 1 ]
       for m = 1 to Len( ::aLayers[ n ] ) - 1
           ::aLayers[ n ][ m ]:FeedForward( aPrevLayer )
       next
    next
   
return nil

METHOD New( aTopology ) CLASS TNet

   local nLayers := Len( aTopology ), n, m
 
   for n = 1 to nLayers
      AAdd( ::aLayers, Array( aTopology[ n ] ) )

      // We have a new layer, now fill it with neurons;
      // the last neuron in each layer will act as the bias node.
      for m = 1 to aTopology[ n ]
         ::aLayers[ n, m ] = TNeuron():New( Len( aTopology ), m )
         // cout << "Made a Neuron!" << endl;
      next

      // Force the bias node's output to 1.0 (it was the last neuron pushed in this layer):
      ATail( ::aLayers[ n ] ):nOutput = 1
    next

return Self

/*
void showVectorVals(string label, vector<double> &v)
{
    cout << label << " ";
    for (unsigned i = 0; i < v.size(); ++i) {
        cout << v[i] << " ";
    }

    cout << endl;
}
*/
regards, saludos

Antonio Linares
www.fivetechsoft.com

Re: Inteligencia artificial - Clase TPerceptron

Postby Antonio Linares » Thu Jun 01, 2017 4:13 pm

Inspecting the neural network:

Code:
#include "FiveWin.ch"

function Main()

   local oNet := TNet():New( { 1, 2, 1 } ), n
   local x

   while oNet:nRecentAverageError < 0.95
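      // train on random samples until the recent average error reaches 0.95;
      // the target is 5 when x is a multiple of 5, and 1 otherwise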
      oNet:FeedForward( { x := nRandom( 1000 ) } )
      oNet:Backprop( { If( x % 5 == 0, 5, 1 ) } )
   end  

   oNet:FeedForward( { 15 } )
   
   XBROWSER ArrTranspose( { "Layer 1 1st neuron" + CRLF + "Input:" + Str( oNet:aLayers[ 1 ][ 1 ]:nOutput ) + ;
                                                   CRLF + "Weigth 1:" + Str( oNet:aLayers[ 1 ][ 1 ]:aWeights[ 1 ], 4, 2 ), ;
                            { "Layer 2, 1st neuron" + CRLF + "Weigth 1: " + Str( oNet:aLayers[ 2 ][ 1 ]:aWeights[ 1 ] ) + ;
                                                      CRLF + "Output: " + Str( oNet:aLayers[ 2 ][ 1 ]:nOutput ),;
                            "Layer 2, 2nd neuron" + CRLF + "Weight 1: " + Str( oNet:aLayers[ 2 ][ 2 ]:aWeights[ 1 ] ) + ;
                                                    CRLF + "Output: " + Str( oNet:aLayers[ 2 ][ 2 ]:nOutput ) },;
                            "Layer 3 1st neuron" + CRLF + "Weigth 1: " + Str( oNet:aLayers[ 3 ][ 1 ]:aWeights[ 1 ] ) + ;
                                                   CRLF + "Weigth 2: " + Str( oNet:aLayers[ 3 ][ 1 ]:aWeights[ 2 ] ) + ;
                                                   CRLF + "Output: " + Str( oNet:aLayers[ 2 ][ 2 ]:nOutput ) } ) ;
      SETUP ( oBrw:nDataLines := 4,;
              oBrw:aCols[ 1 ]:nWidth := 180,;
              oBrw:aCols[ 2 ]:nWidth := 180,;
              oBrw:aCols[ 3 ]:nWidth := 180,;
              oBrw:nMarqueeStyle := 3 )                      
   
return nil


[image: XBrowse window showing the inspected weights and outputs of each neuron]
regards, saludos

Antonio Linares
www.fivetechsoft.com
