If you’re new to machine learning, you may have never encountered the term hyperparameters before. Your trainer handles three categories of data as it trains your model:

Your input data (also called training data) is a collection of individual records (instances) containing the features important to your machine learning problem. This data is used during training to configure your model to accurately make predictions about new instances of similar data. However, the actual values in your input data never directly become part of your model.

Your model’s parameters are the variables that your chosen machine learning technique uses to adjust to your data. For example, a deep neural network (DNN) is composed of processing nodes (neurons), each with an operation performed on data as it travels through the network. When your DNN is trained, each node has a weight value that tells your model how much impact it has on the final prediction. Those weights are an example of your model’s parameters. In many ways, your model’s parameters are the model: they are what distinguishes your particular model from other models of the same type working on similar data.

If model parameters are variables that get adjusted by training with existing data, your hyperparameters are the variables about the training process itself. For example, part of setting up a deep neural network is deciding how many “hidden” layers of nodes to use between the input layer and the output layer, as well as how many nodes each layer should use.
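To make the parameter/hyperparameter distinction concrete, here is a minimal sketch in NumPy. The layer sizes and learning rate are illustrative assumptions, not values from any particular model: the point is that the architecture choices (hyperparameters) are fixed before training, while the weight matrices (parameters) are what training adjusts.

```python
import numpy as np

# Hyperparameters: choices about the architecture and training process,
# set before training begins (these particular values are assumptions).
HIDDEN_LAYERS = [8, 4]   # two hidden layers: 8 nodes, then 4 nodes
LEARNING_RATE = 0.01     # used by the trainer, never stored in the model

def init_parameters(n_inputs, hidden_layers, n_outputs, seed=0):
    """Model parameters: the weight matrices the trainer will adjust."""
    rng = np.random.default_rng(seed)
    sizes = [n_inputs] + hidden_layers + [n_outputs]
    return [rng.normal(0.0, 0.1, size=(a, b))
            for a, b in zip(sizes, sizes[1:])]

params = init_parameters(n_inputs=4, hidden_layers=HIDDEN_LAYERS, n_outputs=1)

# The hyperparameters determine how many parameters exist:
# 4*8 + 8*4 + 4*1 = 68 weights across 3 matrices.
n_weights = sum(w.size for w in params)
```

Changing `HIDDEN_LAYERS` changes the shape and count of the weights, but the weights themselves only take on useful values through training on your input data.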