Video coding research has long pursued compact representations of video data in which perceptual redundancies, in addition to signal redundancies, are removed for higher compression. Many research efforts have been dedicated to modeling the characteristics of the human visual system, and the resulting models have been integrated into video coding frameworks in different ways. Among them, coding enhancements based on the just noticeable distortion (JND) model have drawn much attention in recent years due to their significant gains. A common application of the JND model is the adjustment of quantization by a multiplying factor corresponding to the JND threshold. In this paper, we propose an alternative perceptual video coding method that improves upon the current H.264/AVC framework through an independent JND-directed suppression tool. This new tool finely tunes the quantization using a JND-normalized error model. To make full use of this new rate-distortion adjustment component, the Lagrange multiplier for rate-distortion optimization is derived in terms of the equivalent distortion. Because the H.264/AVC integer Discrete Cosine Transform (DCT) differs from the classic DCT, on which state-of-the-art JND models are computed, we analytically derive a JND mapping formula between the integer DCT domain and the classic DCT domain, which permits reuse of the JND models in a more natural way. In addition, the JND threshold can be refined by adopting a saliency algorithm in the coding framework, and we reduce the complexity of the JND computation by reusing the encoder's motion estimation. Another benefit of the proposed scheme is that it remains fully compliant with the existing H.264/AVC standard. Subjective experimental results show that significant bit savings can be obtained with our method while maintaining visual quality similar to that of traditionally coded H.264/AVC video.
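To make the suppression idea concrete, the following is a minimal, hypothetical sketch of a JND-directed suppression step and a JND-normalized error measure. The function names and the exact rules are illustrative assumptions, not the paper's actual formulation: coefficients whose magnitude falls below the JND threshold are treated as perceptually invisible and zeroed, and each squared error is divided by its per-frequency JND threshold so that one unit of distortion corresponds to a "just noticeable" difference.

```python
# Illustrative sketch only: `suppress` and `jnd_normalized_distortion`
# are hypothetical names, not part of the paper or of H.264/AVC.

def suppress(coeffs, jnd):
    """Zero out transform coefficients whose magnitude is below the
    corresponding JND threshold (matrices given as lists of rows)."""
    return [[c if abs(c) >= t else 0.0 for c, t in zip(crow, trow)]
            for crow, trow in zip(coeffs, jnd)]

def jnd_normalized_distortion(orig, recon, jnd):
    """Sum of squared errors with each error divided by its JND
    threshold, so distortion is expressed in JND units per frequency."""
    return sum(((o - r) / t) ** 2
               for orow, rrow, trow in zip(orig, recon, jnd)
               for o, r, t in zip(orow, rrow, trow))
```

Under this normalization, a rate-distortion optimizer naturally spends fewer bits on coefficients the viewer cannot distinguish, which is the intuition behind deriving the Lagrange multiplier in terms of an equivalent (perceptually weighted) distortion.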
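The need for a mapping between the two transform domains can be illustrated with the well-known scaling relation between the H.264/AVC 4x4 integer transform and the orthonormal DCT. The sketch below is an assumption-laden approximation, not the paper's analytic formula: a 2-D coefficient at position (i, j) produced by the integer core transform C X C^T is larger than the corresponding orthonormal-DCT coefficient by roughly the product of the row norms of C, so a DCT-domain JND threshold must be scaled up by that same factor before it can gate integer-transform coefficients.

```python
import math

# H.264/AVC 4x4 forward integer-transform core matrix (per the standard).
C = [[1,  1,  1,  1],
     [2,  1, -1, -2],
     [1, -1, -1,  1],
     [1, -2,  2, -1]]

# Row norms a_i of C: [2, sqrt(10), 2, sqrt(10)]. These are folded into
# the standard's quantization scaling; here we expose them explicitly.
a = [math.sqrt(sum(v * v for v in row)) for row in C]

def jnd_dct_to_int(t_dct):
    """Scale a 4x4 classic-DCT JND threshold matrix into the integer-DCT
    domain by a_i * a_j per position (illustrative approximation only,
    not the paper's exact mapping formula)."""
    return [[t_dct[i][j] * a[i] * a[j] for j in range(4)]
            for i in range(4)]
```

For example, a flat DCT-domain threshold of 1.0 maps to 4.0 at DC and 10.0 at the (1, 1) position, reflecting how much larger the unscaled integer-transform coefficients are at those frequencies.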