Thermal Error Estimation of a Two-Axis Platform Using Deep Neural Networks (深度神經網路應用於雙軸平台之熱誤差估測)

Author: 黃子峻
Year of publication: 2018
Document type: Thesis
Description: 106
During machining, a machine tool often heats up due to high-speed friction between its components, causing thermal deformation that reduces machining accuracy. If the thermal deformation during machining is known in advance, the machine tool can be compensated effectively. In this study, neural network and deep learning techniques are applied to a two-axis platform to predict the thermal deformation of the ball screw during machining from several fixed monitored values on the platform.

Two models are used: a Backpropagation Network (BPN) and a Long Short-Term Memory (LSTM) recurrent neural network. Seven sets of machining data, comprising three single machining conditions and four composite machining conditions, are used to train and test the networks. The network inputs are the experimentally measured motor temperature, support-end temperature, room temperature, nut temperature, nut speed, nut position, and distance traveled by the platform; the output is the temperature rise of the screw, which is used in place of its thermal deformation because direct measurement of the deformation is easily affected by the motor end and the fixed end. According to the one-dimensional linear expansion formula, the thermal deformation and the temperature rise are simply proportional, so the deformation can be recovered from the predicted temperature rise through that proportional relationship. In the finite element model (FEM), which follows the experimental conditions, the maximum stroke of the screw is 900 mm, and the screw temperature rise is simulated at monitoring points spaced every 10 mm, giving 91 monitoring points and therefore 91 network outputs.

The backpropagation network is built in MATLAB 2017a with five hidden layers of 20 neurons each and is trained with the Scaled Conjugate Gradient (SCG) algorithm. The LSTM model is built with Keras on the TensorFlow framework with two hidden layers of 20 neurons each and is trained with the RMSProp algorithm. Both networks use the mean squared error (MSE) as the cost function.

Four of the seven working conditions are used for both training and testing; the other three are used only for testing. When tested under the same working conditions as those used for training, the maximum error is about 1 °C; when tested under conditions not seen during training, the maximum error is about 2 °C, indicating a reasonable degree of prediction accuracy.
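The proportional relationship invoked in the abstract is the standard one-dimensional linear thermal expansion formula; the notation below (α for the expansion coefficient, L for the segment length, ΔT for the temperature rise) is conventional and is not taken from the thesis itself.

```latex
% One-dimensional linear thermal expansion: the axial deformation of a
% screw segment is proportional to its temperature rise.
%   \Delta L : thermal deformation of the segment
%   \alpha   : coefficient of linear thermal expansion of the screw material
%   L        : original length of the segment
%   \Delta T : temperature rise of the segment
\[
  \Delta L = \alpha \, L \, \Delta T
\]
% Hence, predicting \Delta T at each monitoring point determines \Delta L
% up to the constant factor \alpha L.
```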
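As an illustration of the LSTM configuration described above (two hidden layers of 20 neurons, RMSProp optimizer, MSE loss, 91 outputs), a minimal Keras sketch might look like the following. The seven input features per time step follow the monitored quantities listed in the abstract, but the sequence length, batch size, and placeholder data are illustrative assumptions, not settings taken from the thesis.

```python
# Minimal sketch of the LSTM architecture described in the abstract:
# two hidden layers of 20 neurons, RMSProp optimizer, MSE loss,
# and 91 outputs (screw temperature rise at 91 monitoring points).
# Sequence length and data below are illustrative assumptions only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 7   # motor temp, support-end temp, room temp, nut temp,
                 # nut speed, nut position, platform travel distance
n_outputs = 91   # temperature rise at 91 points along the 900 mm screw
timesteps = 30   # assumed length of each input time window

model = keras.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(20, return_sequences=True),  # first hidden layer
    layers.LSTM(20),                         # second hidden layer
    layers.Dense(n_outputs),                 # 91 temperature-rise outputs
])

model.compile(optimizer="rmsprop", loss="mse")  # RMSProp + MSE as in the thesis

# Shape check with random placeholder data (not real measurements)
x = np.random.rand(8, timesteps, n_features).astype("float32")
y = np.random.rand(8, n_outputs).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```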
Database: Networked Digital Library of Theses & Dissertations