I have a large data set of intensity (counts) versus wavelength (nm) that I want to fit with Planck's law to determine the temperature parameter.
The data set is imported from a text file:
import numpy as np
from scipy.optimize import curve_fit

with open('Good_SolarRun2.txt') as g:
    data = g.read()
data = data.split('\n')
Wavle2 = [float(row.split()[0]) for row in data if row.strip()]  # Wavelength (nm)
Int2 = [float(row.split()[1]) for row in data if row.strip()]    # Intensity (counts)
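As an aside, the manual split-and-convert above can be done in one call with `np.loadtxt`, assuming the file really is two whitespace-separated numeric columns. This is a minimal sketch; the file name and its contents here are hypothetical stand-ins for the real data file:

```python
import numpy as np

# Hypothetical stand-in file in the same two-column format:
# wavelength (nm) in column 0, intensity (counts) in column 1.
with open('example_solar.txt', 'w') as f:
    f.write("400.0 1200\n500.0 1800\n600.0 1500\n")

# loadtxt handles the line splitting and float conversion in one call,
# and skips trailing blank lines automatically
wavelength_nm, intensity = np.loadtxt('example_solar.txt', unpack=True)
```

`unpack=True` transposes the result so each column comes back as its own array, matching the `Wavle2`/`Int2` pair above.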
So I now define the fitting model for Planck's law (in terms of wavelength):
https://en.wikipedia.org/wiki/Planck%27s_law
from scipy.constants import h, k, c

def Plancks_Law(lamb, T):
    a = 2.0*h*c*c   # W m^2  (spectral radiance prefactor, 2hc^2)
    b = (h*c)/k     # m K
    return a/np.power(lamb, 5.0) * 1/(np.exp(b/(lamb*T)) - 1)
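One quick way to sanity-check a Planck-function implementation like this is to verify that its peak obeys Wien's displacement law (peak wavelength = b_Wien / T). A minimal self-contained sketch, using the same formula as above in double precision:

```python
import numpy as np
from scipy.constants import h, k, c

def planck(lamb, T):
    a = 2.0 * h * c**2   # W m^2 (2hc^2)
    b = h * c / k        # m K
    return a / lamb**5 / (np.exp(b / (lamb * T)) - 1.0)

# Evaluate over a fine wavelength grid at T = 5000 K
lamb = np.linspace(100e-9, 3000e-9, 100000)
T = 5000.0
peak = lamb[np.argmax(planck(lamb, T))]

# Wien's displacement law: lambda_max = b_Wien / T, b_Wien ≈ 2.898e-3 m K
wien = 2.897771955e-3 / T
```

If `peak` and `wien` agree to within the grid spacing, the wavelength-form Planck function is implemented correctly.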
So now, I go about setting up the curve_fit configuration with my data set.
# Convert wavelength array from nanometers to meters
x = np.array([w*1e-9 for w in Wavle2])
# Intscale2p has the same shape as Wavle2 and Int2,
# but its values are scaled by me
y = np.array(Intscale2p)

p0_R = [5000.]
optR, pcovR = curve_fit(Plancks_Law, x, y, p0=p0_R)
T_R = optR[0]
T_Rp = pcovR
yM = Plancks_Law(x, T_R)
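To separate a problem in the fitting setup from a problem in the data, here is a self-contained sanity check of the same `curve_fit` call on synthetic data generated from the model itself (the temperature and wavelength grid are made-up values, not the real data set):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.constants import h, k, c

def planck(lamb, T):
    a = 2.0 * h * c**2   # W m^2 (2hc^2)
    b = h * c / k        # m K
    return a / lamb**5 / (np.exp(b / (lamb * T)) - 1.0)

# Synthetic spectrum at a known temperature (hypothetical stand-in for the data)
lamb = np.linspace(300e-9, 1000e-9, 500)  # 300-1000 nm, in meters
T_true = 5778.0
y = planck(lamb, T_true)

# Same call pattern as above: one free parameter, initial guess 5000 K
popt, pcov = curve_fit(planck, lamb, y, p0=[5000.0])
T_fit = popt[0]
```

When model and data are on the same absolute scale, the optimizer moves away from the 5000 K starting guess and recovers `T_true`; if the real fit returns the guess unchanged, that points at a mismatch between the data's scale/units and the model rather than at `curve_fit` itself.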
This is not working for me. The fit returns T_R = 5000, which is exactly the value I set as the initial guess.
Am I doing something incorrect with the fit?