Neural Network Hyperparameter Optimization in a Single Command

Today is May 20th (520, the informal Chinese Valentine's Day), so Lvy is here to share some love~
Why is everyone so fond of neural networks? Because they simply work: even on routine regression and classification tasks they deliver solid results.




Benchmarking MATLAB's Old and New Neural Network Toolbox Functions
Implementing a BP neural network in MATLAB is easy. The construction below is probably the one you have seen most often; unfortunately, it is now out of date:
  • net = newff(x_train_regular, y_train_regular, [6], {'tansig'});
Running help newff in MATLAB R2022 confirms that the function has long been deprecated, although it still works.
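For completeness: the documented successors to newff in the Deep Learning Toolbox are feedforwardnet and fitnet. A minimal sketch, assuming the same column-oriented data layout used in the scripts below:

% Hedged sketch: the non-deprecated Deep Learning Toolbox route.
net = feedforwardnet(6);                 % one hidden layer with 6 neurons
net.layers{1}.transferFcn = 'tansig';    % same activation as the newff call
net = train(net, x_train_regular, y_train_regular);
y_hat = net(x_test_regular);             % forward pass on normalized test inputs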

However, there is a newer neural network function, fitrnet, in the Statistics and Machine Learning Toolbox (introduced in R2021a, so it is only available in recent releases):
  • net = fitrnet(x_train_regular, y_train_regular, "LayerSizes", 6, 'Activations', {'tanh'});
Testing both approaches on the same data shows that the newer fitrnet is indeed faster and more accurate.
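To reproduce that comparison yourself, here is a minimal timing sketch. It assumes the x_train_regular/y_train_regular variables prepared in the scripts below; note that newff/train expect observations in columns while fitrnet expects rows:

% Hedged sketch: time both constructors once on the same normalized data.
old_net = newff(x_train_regular, y_train_regular, 6, {'tansig'});
old_net.trainParam.showWindow = false;   % suppress the training GUI
tic; old_net = train(old_net, x_train_regular, y_train_regular); t_old = toc;
tic; new_net = fitrnet(x_train_regular', y_train_regular', ...
                       'LayerSizes', 6, 'Activations', 'tanh'); t_new = toc;
fprintf('newff+train: %.3f s, fitrnet: %.3f s\n', t_old, t_new);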


Regression with the old API (newff)
clc; clear; close all;
load('abalone_data.mat')
%%
[m,n] = size(data);
train_num = round(0.8*m);                 % 80% of samples for training
x_train_data = data(1:train_num, 1:n-1);  % predictors
y_train_data = data(1:train_num, n);      % response
x_test_data  = data(train_num+1:end, 1:n-1);
y_test_data  = data(train_num+1:end, n);
x_train_data = x_train_data';             % newff wants observations in columns
y_train_data = y_train_data';
x_test_data  = x_test_data';
[x_train_regular, x_train_maxmin] = mapminmax(x_train_data);
[y_train_regular, y_train_maxmin] = mapminmax(y_train_data);
% create the network
t1 = clock;
net = newff(x_train_regular, y_train_regular, [6], {'tansig'});
% net = newff(x_train_regular, y_train_regular, [6,3,3], {'logsig','tansig','logsig','purelin'});
% net = newff(x_train_regular, y_train_regular, 6, {'logsig','logsig'});
% net = newff(x_train_regular, y_train_regular, 6, {'logsig','purelin'});
% net = newff(x_train_regular, y_train_regular, 6, {'logsig','tansig'});
% % maximum number of training epochs
% net.trainParam.epochs = 50000;
% % convergence goal
% net.trainParam.goal = 0.000001;
% newff(P,T,S,TF,BTF,BLF,PF,IPF,OPF,DDF) takes optional inputs,
%   TF - Transfer function of ith layer. Default is 'tansig' for
%        hidden layers, and 'purelin' for output layer.
% Available transfer functions:
%   compet    - Competitive transfer function.
%   elliotsig - Elliot sigmoid transfer function.
%   hardlim   - Positive hard limit transfer function.
%   hardlims  - Symmetric hard limit transfer function.
%   logsig    - Logarithmic sigmoid transfer function.
%   netinv    - Inverse transfer function.
%   poslin    - Positive linear transfer function.
%   purelin   - Linear transfer function.
%   radbas    - Radial basis transfer function.
%   radbasn   - Radial basis normalized transfer function.
%   satlin    - Positive saturating linear transfer function.
%   satlins   - Symmetric saturating linear transfer function.
%   softmax   - Soft max transfer function.
%   tansig    - Symmetric sigmoid transfer function.
%   tribas    - Triangular basis transfer function.
% train the network
[net,~] = train(net, x_train_regular, y_train_regular);
% normalize the test inputs with the training statistics
x_test_regular = mapminmax('apply', x_test_data, x_train_maxmin);
% run the test inputs through the network
y_test_regular = sim(net, x_test_regular);
% map the outputs back to the original scale
BP_predict = mapminmax('reverse', y_test_regular, y_train_maxmin);
%%
BP_predict = BP_predict';
errors_nn = sum(abs(BP_predict - y_test_data)./y_test_data)/length(y_test_data);
t2 = clock;
Time_all = etime(t2, t1);
disp(['Elapsed time: ', num2str(Time_all)])
figure;
color = [111,168,86; 128,199,252; 112,138,248; 184,84,246]/255;
plot(y_test_data, 'Color', color(2,:), 'LineWidth', 1)
hold on
plot(BP_predict, '*', 'Color', color(1,:))
titlestr = ['MATLAB newff network', '   error: ', num2str(errors_nn)];
title(titlestr)
disp(titlestr)

Output:
Elapsed time: 0.794
MATLAB newff network   error: 0.15142


Regression with the new API (fitrnet)

clc; clear; close all;
load('abalone_data.mat')
%%
[m,n] = size(data);
train_num = round(0.8*m);                 % 80% of samples for training
x_train_data = data(1:train_num, 1:n-1);
y_train_data = data(1:train_num, n);
x_test_data  = data(train_num+1:end, 1:n-1);
y_test_data  = data(train_num+1:end, n);
x_train_data = x_train_data';
y_train_data = y_train_data';
x_test_data  = x_test_data';
[x_train_regular, x_train_maxmin] = mapminmax(x_train_data);
[y_train_regular, y_train_maxmin] = mapminmax(y_train_data);
% fitrnet expects observations in rows, so transpose back
x_train_regular = x_train_regular';
y_train_regular = y_train_regular';
t1 = clock;
net = fitrnet(x_train_regular, y_train_regular, "LayerSizes", 6, 'Activations', {'tanh'});
%%
% plot the loss recorded during training
figure;
iteration   = net.TrainingHistory.Iteration;
trainLosses = net.TrainingHistory.TrainingLoss;
plot(iteration, trainLosses)
legend(["Training"])
xlabel("Iteration")
ylabel("Mean Squared Error")
%% Name-value options of fitrnet (from the function help):
%   LayerSizes - Sizes of fully connected layers: 10 (default) | positive integer vector
%   Activations - Activation functions: 'relu' (default) | 'tanh' | 'sigmoid' | 'none' | string array | cell array of character vectors
%   LayerWeightsInitializer - Weight initialization: 'glorot' (default) | 'he'
%   LayerBiasesInitializer - Bias initialization: 'zeros' (default) | 'ones'
%   ObservationsIn - Observation dimension: 'rows' (default) | 'columns'
%   Lambda - Regularization term strength: 0 (default) | nonnegative scalar
%   Standardize - Standardize predictor data: false/0 (default) | true/1
%   Verbose - Verbosity level: 0 (default) | 1
%   VerboseFrequency - 1 (default) | positive integer scalar
%   StoreHistory - Store training history: false/0 | true/1 (default)
%   IterationLimit - 1e3 (default) | positive integer scalar
%   GradientTolerance - 1e-6 (default) | nonnegative scalar
%   LossTolerance - 1e-6 (default) | nonnegative scalar
%   StepTolerance - 1e-6 (default) | nonnegative scalar
%   ValidationData - cell array | table
%   ValidationFrequency - 1 (default) | positive integer scalar
%   ValidationPatience - 6 (default) | nonnegative integer scalar
%   CategoricalPredictors - vector of positive integers | logical vector | character matrix | string array | cell array | 'all'
%   PredictorNames - string array | cell array of unique character vectors
%   ResponseName - "Y" (default) | character vector | string scalar
%   Weights - nonnegative numeric vector | name of variable in Tbl
%   CrossVal - 'off' (default) | 'on'
%   CVPartition - cvpartition object | [] (default)
%   Holdout - scalar value in the range (0,1)
%   KFold - 10 (default) | positive integer value greater than 1
%   Leaveout - 'off' (default) | 'on'
%   OptimizeHyperparameters - 'none' (default) | 'auto' | 'all' | string/cell array of parameter names | vector of optimizableVariable objects
%   HyperparameterOptimizationOptions - structure
% normalize the test inputs with the training statistics
x_test_regular = mapminmax('apply', x_test_data, x_train_maxmin);
x_test_regular = x_test_regular';
% predict on the test set
y_test_regular = predict(net, x_test_regular);
% map the predictions back to the original scale
BP_predict = mapminmax('reverse', y_test_regular, y_train_maxmin);
%%
errors_nn = sum(abs(BP_predict - y_test_data)./y_test_data)/length(y_test_data);
t2 = clock;
Time_all = etime(t2, t1);
disp(['Elapsed time: ', num2str(Time_all)])
figure;
color = [111,168,86; 128,199,252; 112,138,248; 184,84,246]/255;
plot(y_test_data, 'Color', color(2,:), 'LineWidth', 1)
hold on
plot(BP_predict, '*', 'Color', color(1,:))
titlestr = ['MATLAB fitrnet network', '   error: ', num2str(errors_nn)];
title(titlestr)
disp(titlestr)

Output:
Elapsed time: 0.601
MATLAB fitrnet network   error: 0.14456
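A side note on the new API: instead of the manual mapminmax round trip, fitrnet can standardize the predictors itself through its documented Standardize flag. A minimal sketch using the same abalone variables as above:

% Hedged sketch: let fitrnet z-score the predictors internally,
% skipping the manual mapminmax normalization (the response stays
% in original units throughout).
mdl = fitrnet(x_train_data', y_train_data', ...
              'LayerSizes', 6, 'Activations', 'tanh', ...
              'Standardize', true);               % z-score each predictor
y_hat    = predict(mdl, x_test_data');            % predictions in original units
mse_test = loss(mdl, x_test_data', y_test_data);  % default regression loss: MSE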

Bayesian Optimization of the Hyperparameters in One Line
Still struggling with hyperparameter tuning, unsure how many layers, how many neurons, or which activation function to pick? Then don't miss MATLAB's built-in tuner: a single command runs the whole search automatically.
clc; clear; close all;
load('abalone_data.mat')
[m,n] = size(data);
train_num = round(0.8*m);                 % 80% of samples for training
x_train_data = data(1:train_num, 1:n-1);
y_train_data = data(1:train_num, n);
x_test_data  = data(train_num+1:end, 1:n-1);
y_test_data  = data(train_num+1:end, n);
x_train_data = x_train_data';
y_train_data = y_train_data';
x_test_data  = x_test_data';
[x_train_regular, x_train_maxmin] = mapminmax(x_train_data);
[y_train_regular, y_train_maxmin] = mapminmax(y_train_data);
x_train_regular = x_train_regular';
y_train_regular = y_train_regular';
optimize_num = 10;                        % number of objective evaluations
% Bayesian optimization of the hyperparameters
Mdl = fitrnet(x_train_regular, y_train_regular, ...
    "OptimizeHyperparameters", "auto", ...
    "HyperparameterOptimizationOptions", ...
    struct("AcquisitionFunctionName", "expected-improvement-plus", ...
           "MaxObjectiveEvaluations", optimize_num));
%% evaluate on the test set
x_test_regular = mapminmax('apply', x_test_data, x_train_maxmin);
x_test_regular = x_test_regular';
y_test_regular = predict(Mdl, x_test_regular);
% map the predictions back to the original scale
BP_predict = mapminmax('reverse', y_test_regular, y_train_maxmin);
errors_nn = sum(abs(BP_predict - y_test_data)./y_test_data)/length(y_test_data);
figure;
color = [111,168,86; 128,199,252; 112,138,248; 184,84,246]/255;
plot(y_test_data, 'Color', color(2,:), 'LineWidth', 1)
hold on
plot(BP_predict, '*', 'Color', color(1,:))
titlestr = ['MATLAB optimized network', '   error: ', num2str(errors_nn)];
title(titlestr)
disp(titlestr)

Best observed feasible point:

    Activations    Standardize      Lambda      LayerSizes
    ___________    ___________    __________    __________
      sigmoid         false       2.9879e-07     31     1
A genuinely one-click optimization: the activation function, the regularization coefficient, and the number of layers and neurons per layer all come straight out of the search!
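If you want to inspect or reuse the search afterwards, the fitted model keeps the full optimization trace. A small sketch using documented accessors (bestPoint is a standard method of the BayesianOptimization object):

% Hedged sketch: read the optimization trace stored on the model.
results = Mdl.HyperparameterOptimizationResults;   % BayesianOptimization object
[best_hp, best_obj] = bestPoint(results);          % best hyperparameter row + objective
disp(best_hp)
fprintf('Best objective value: %.4f\n', best_obj)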


The data and source code are available from 好玩的MATLAB's companion WeChat account 【Lvy的口袋】: reply with the keyword 【新神经网络】 to get them~
Feel free to follow that side account, 【Lvy的口袋】, where I share knowledge and everyday life~

