Intra prediction is a key technology for reducing spatial redundancy in modern video coding standards. Recently, deep-learning-based methods that generate the intra prediction directly with neural networks have achieved superior performance over traditional direction-based intra prediction. However, these methods cannot handle complex blocks containing mixed directional textures or recurrent patterns, since they rely only on the neighboring reference samples of the current block. Other intermediate information generated during the coding process, denoted as reference priors in this paper, remains unexploited. In this paper, a Current Frame Priors assisted Neural Network (CFPNN) is presented to improve intra prediction efficiency. Specifically, we use the local contextual information provided by multiple neighboring references as the primary inference source. Beyond the neighboring references, we exploit two further reference priors within the current frame: the predictor found by intra block copy (IntraBC) and the corresponding residual component. The IntraBC predictor supplies useful nonlocal information that, combined with the neighboring local information, helps generate more accurate predictions for complex blocks, while the residual component, which reflects the characteristics of the block to some extent, is used to reduce the noise contained in the reconstructed reference samples. Moreover, we investigate how best to integrate the proposed method into the codec. Experimental results demonstrate that, compared with HEVC, the proposed CFPNN achieves an average BD-rate reduction of 4.1% for the luma component under the All Intra configuration.