Friday, July 1, 2016

having two input matrices in regression


I wish to learn a set of coefficients, some of which should be sparse while the others are simply regularised as usual. The sparse part is done with Relevance Vector Machines in mind (a Gamma hyperprior on each coefficient's precision). My model is as follows:

import pymc3 as pm

with pm.Model() as model:
    b0 = pm.Normal('b0', mu=0, sd=10)
    beta = pm.Normal('beta', mu=0, sd=30, shape=x_train.shape[1])  # ordinary coefficients

    # sparse weights: RVM-style per-coefficient precision
    alpha = pm.Gamma('alpha', 1e-4, 1e-4, shape=Phi_train.shape[1])
    beta_s = pm.Normal('beta_s', mu=0, tau=alpha, shape=Phi_train.shape[1])  # sparse betas

    # Likelihood - NOTE x_train and Phi_train are the two INPUT matrices
    mu = b0 + x_train*beta.T + Phi_train*beta_s.T
    inv_sigma = pm.Gamma('sigma', 1e-4, 1e-4)  # noise precision
    y_est = pm.Normal('y_est', mu=mu, tau=inv_sigma, observed=y_train)

except that it doesn't seem to like the `mu = b0...` line. If I drop either `x_train*beta.T` or `Phi_train*beta_s.T` it compiles fine. With both, it fails with: `ValueError: Input dimension mis-match. (input[0].shape[1] = 35, input[1].shape[1] = 500)`

The sizes of the two input matrices are (210042, 35) and (210042, 500). So am I doing something wrong here?
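The likely culprit is that `*` is elementwise multiplication with broadcasting, not a matrix product, and `.T` on a 1-D vector is a no-op. So `x_train*beta.T` keeps shape (n, 35) and `Phi_train*beta_s.T` keeps shape (n, 500), and those two cannot be added, which is exactly the dimension mismatch in the error. A minimal NumPy sketch of the shape arithmetic (the same broadcasting rules Theano applies; the sizes are small stand-ins for the real data):

```python
import numpy as np

n = 6  # stand-in for the 210042 rows
x = np.ones((n, 35))
Phi = np.ones((n, 500))
beta = np.ones(35)
beta_s = np.ones(500)

# Elementwise multiply broadcasts the vector across rows, so each term
# keeps its matrix's column count and the two terms cannot be added.
print((x * beta.T).shape)      # (6, 35)
print((Phi * beta_s.T).shape)  # (6, 500)

# A matrix-vector product instead gives one value per observation
# from each term, so the two contributions add cleanly.
mu = x.dot(beta) + Phi.dot(beta_s)
print(mu.shape)                # (6,)
```

Inside the PyMC3 model the analogous change would be something like `mu = b0 + pm.math.dot(x_train, beta) + pm.math.dot(Phi_train, beta_s)` (or `theano.tensor.dot`), assuming the intent is the usual linear-predictor matrix-vector product.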

