Popis: |
Conventional discrete representations of 3D objects are increasingly being replaced by implicit, continuously differentiable representations. With the rise of deep neural networks, parameterizing these continuous functions with a network has emerged as a powerful paradigm. Many machine learning problems, such as inferring information from 3D images and videos or reconstructing scenes, benefit from continuous parameterization, which is memory-efficient and allows the model to capture finer detail. In this paper, we propose improving implicit shape representation by investigating the architecture of networks based on periodic activation functions. We conduct qualitative and quantitative experiments to demonstrate the effect of network size and depth on shape quality and detail.
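The networks described above follow the SIREN idea of using sine activations in a coordinate-based MLP. Below is a minimal NumPy sketch, not the paper's actual model: the layer sizes, the frequency scale `OMEGA_0 = 30`, and the names `siren_init` and `implicit_shape` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
OMEGA_0 = 30.0  # frequency scale commonly used with sine activations (assumption)

def siren_init(fan_in, fan_out, first_layer=False):
    # SIREN-style initialization: uniform in [-1/fan_in, 1/fan_in] for the
    # first layer, [-sqrt(6/fan_in)/omega_0, +sqrt(6/fan_in)/omega_0] after.
    bound = 1.0 / fan_in if first_layer else np.sqrt(6.0 / fan_in) / OMEGA_0
    W = rng.uniform(-bound, bound, size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return W, b

# A small 3-layer network mapping 3D coordinates to one scalar
# (e.g. a signed distance describing the shape implicitly).
W1, b1 = siren_init(3, 64, first_layer=True)
W2, b2 = siren_init(64, 64)
W3, b3 = siren_init(64, 1)

def implicit_shape(coords):
    h = np.sin(OMEGA_0 * (coords @ W1 + b1))  # periodic activation
    h = np.sin(OMEGA_0 * (h @ W2 + b2))
    return h @ W3 + b3                        # linear output layer

values = implicit_shape(rng.uniform(-1, 1, size=(5, 3)))
print(values.shape)  # (5, 1)
```

Because the representation is a smooth function of the input coordinates, the shape can be queried at arbitrary resolution, which is the memory-efficiency argument made in the abstract: detail is limited by network capacity (size and depth), not by a fixed voxel grid.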