Popis: |
Training a deep neural network (DNN) incurs a high computational cost, so buying models from sellers with abundant computing resources has become common practice. However, the buyer-seller environment is not always trusted. To protect neural network models from being leaked in an untrusted environment, we propose a novel copyright protection scheme for DNNs based on an input-sensitive neural network (ISNN). The main idea of the ISNN is to make the DNN sensitive to the key and copyright information, so that only a buyer holding the correct key can use the model. During the training phase, we add a specific perturbation to clean images and mark them as legal inputs, while all other inputs are treated as illegal. We design a loss function that keeps the outputs for legal inputs close to the true results while pushing the outputs for illegal inputs far away from them. Experimental results demonstrate that the proposed scheme is effective, valid, and secure.
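
The abstract does not give the exact form of the loss, so the following is only a minimal, hypothetical PyTorch-style sketch of the described idea: a standard cross-entropy term pulls the outputs of legal (key-perturbed) inputs toward the true labels, while a hinge-style term pushes the outputs of illegal inputs away from them. The function name `isnn_loss` and the parameters `margin` and `alpha` are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def isnn_loss(model, legal_x, illegal_x, labels, margin=1.0, alpha=1.0):
    """Hypothetical ISNN training loss (illustrative sketch only).

    legal_x:   clean images with the key perturbation added (legal inputs)
    illegal_x: the same images without the key (illegal inputs)
    labels:    true class labels
    """
    # Legal inputs: ordinary cross-entropy pulls outputs toward the true labels.
    legal_loss = F.cross_entropy(model(legal_x), labels)

    # Illegal inputs: penalize any remaining confidence on the true label,
    # so outputs are pushed far from the true results (hinge-style term).
    illegal_logits = model(illegal_x)
    true_logprob = F.log_softmax(illegal_logits, dim=1).gather(
        1, labels.unsqueeze(1)).squeeze(1)
    illegal_loss = torch.clamp(margin + true_logprob, min=0).mean()

    return legal_loss + alpha * illegal_loss
```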