Motion-compensated temporal filtering is a useful framework for fully scalable video compression. However, when the assumed motion model cannot represent the true motion perfectly, both the temporal high- and low-frequency sub-bands may contain artificial edges, which can reduce coding efficiency and cause ghosting artifacts in the reconstructed video sequence at lower bit rates or under temporal scaling. We propose a new technique that uses visual models to mitigate ghosting artifacts in the temporal low-frequency sub-bands. Specifically, we propose content-adaptive update schemes in which visual models determine image-dependent upper bounds on the information to be updated. Experimental results show that the proposed algorithm significantly improves the subjective visual quality of the low-pass temporal frames while its coding performance matches or exceeds that of the classical update step.
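The general idea of a bounded update step can be illustrated with a minimal sketch. This is not the paper's algorithm, only a hedged toy example: it assumes a simple Haar-style lifting update (half the high-band sample) and a precomputed per-pixel visibility-threshold map (a JND-like bound); the function and variable names are illustrative.

```python
import numpy as np

def adaptive_update(even, high, jnd):
    """Toy bounded update step (illustrative, not the paper's scheme).

    The classical lifting update would add the full filtered high-band
    contribution to the even frame; here that contribution is clipped
    per pixel to a visibility threshold so ghosting energy cannot
    exceed what a visual model deems noticeable.
    """
    u = 0.5 * high                    # simple Haar-style update filter
    u_bounded = np.clip(u, -jnd, jnd) # limit the updated information
    return even + u_bounded

# Toy data: a flat even frame, a high band with one strong motion-
# failure edge, and an assumed uniform JND threshold of 5.
even = np.array([[100.0, 100.0], [100.0, 100.0]])
high = np.array([[ 40.0, -40.0], [  4.0,  -4.0]])
jnd  = np.full((2, 2), 5.0)

low = adaptive_update(even, high, jnd)
# Large update contributions (20) are clipped to 5; small ones (2) pass.
```

With a classical update, the low band would swing to 120/80 at the mis-modeled edge; the bounded update keeps it within 105/95, which is the qualitative effect the abstract describes.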