
I am training a simple neural network in Keras to fit my non-linear thermodynamic equation of state. I train with backpropagation and stochastic gradient descent. The network approximates the equation of state, but the fit is far from good. The code is below:

import numpy as np
import CoolProp.CoolProp as CP

import matplotlib.pyplot as plt

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# number of temperature and pressure samples
nT = 20000
nP = 100
T_min = 50
T_max = 250

fluid = 'nitrogen'

# get critical pressure
p_c = CP.PropsSI(fluid,'pcrit')

# pressure vector (note: not used below; density is evaluated at the fixed pressure 1.1*p_c)
p_vec = np.linspace(1, 3, nP) * p_c

T_vec = np.linspace(T_min, T_max, nT)

rho_vec = np.zeros(nT)

print('Generate data ...')
# density at fixed pressure 1.1*p_c along the temperature vector
for i in range(nT):
    rho_vec[i] = CP.PropsSI('D', 'T', T_vec[i], 'P', 1.1*p_c, fluid)

# normalize inputs and targets to [0, 1]
T_max = max(T_vec)
rho_max = max(rho_vec)

T_norm = T_vec/T_max
rho_norm = rho_vec/rho_max


print('set up ANN')
######################
model = Sequential()
model.add(Dense(200, activation='relu', input_dim=1, kernel_initializer='uniform'))
model.add(Dense(100, activation='relu', kernel_initializer='uniform'))
model.add(Dense(1, activation='linear'))

sgd = SGD(lr=0.05, decay=1e-6, momentum=0.9, nesterov=False)
# pass the configured optimizer object, not the string 'sgd';
# otherwise the learning rate, decay and momentum above are silently ignored
# (accuracy is also dropped as a metric: it is not meaningful for regression)
model.compile(loss='mean_squared_error', optimizer=sgd)

# fit the model
history = model.fit(T_norm, rho_norm, epochs=20, batch_size=10,
                    validation_split=0.3, shuffle=True)

predict = model.predict(T_norm, batch_size=10)

plt.plot(predict, label='ANN prediction')
plt.plot(rho_norm, label='data')
plt.legend()
plt.show()

The outcome is the following (blue is the ANN prediction): [plot: ANN prediction vs. normalized density data]

However, when I use the same model with the same architecture on a sine function, it predicts quite well (blue again is the ANN): [plot: ANN fit of a sine function]

How can I tune my model so that it accurately predicts my thermodynamic equation of state?

Max86
  • It looks like your model is learning the thermodynamic function; it just seems like it hasn't gone through enough iterations. Have you tried bumping up the number of epochs? Also, plotting the training error and test error vs. the iteration number can help you see what the ideal number of iterations should be (see the first sketch after these comments). – guy Aug 30 '17 at 13:33
  • Thanks for that hint. Indeed, I increased the epochs to 3000, and after half a day of fitting I obtained quite a nice result. However, isn't there a faster way to do nonlinear regression? I assume it shouldn't be that computationally demanding. – Max86 Sep 01 '17 at 16:51
  • If you have a new question, please ask it by clicking the [Ask Question](https://stats.stackexchange.com/questions/ask) button. Include a link to this question if it helps provide context. - [From Review](/review/low-quality-posts/158941) – kjetil b halvorsen Sep 01 '17 at 17:31
  • @Max86 You do have hundreds of neurons; I think that would be the next thing to look at reducing if possible. Also, I have found in practice that the `Adam` optimizer learns faster than `SGD` for relatively simple functions, so perhaps give that a try as well and compare it to the `SGD` results (see the second sketch below). – guy Sep 01 '17 at 18:24
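A minimal sketch of the loss-curve diagnostic suggested in the first comment, assuming the `history` object returned by the question's `model.fit` call (Keras records `loss` and `val_loss` per epoch when `validation_split` is set):

import matplotlib.pyplot as plt

# plot training and validation loss per epoch to judge convergence
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('MSE loss')
plt.legend()
plt.show()

If both curves are still falling at the last epoch, the model simply needs more iterations; if the validation curve flattens while the training curve keeps dropping, more epochs will only overfit.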
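And a sketch of the smaller network with the `Adam` optimizer suggested in the last comment; the layer widths and epoch count here are illustrative guesses, not tuned values, and `T_norm`/`rho_norm` come from the question's code:

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# a much smaller network; Adam typically converges faster than plain SGD
# on smooth one-dimensional regression problems like this one
model = Sequential()
model.add(Dense(20, activation='relu', input_dim=1, kernel_initializer='uniform'))
model.add(Dense(20, activation='relu', kernel_initializer='uniform'))
model.add(Dense(1, activation='linear'))

model.compile(loss='mean_squared_error', optimizer=Adam(lr=0.001))
history = model.fit(T_norm, rho_norm, epochs=500, batch_size=32,
                    validation_split=0.3, shuffle=True)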

0 Answers