High dynamic range (HDR) imaging provides a larger range of luminosity and a wider color gamut than conventional low dynamic range (LDR) imaging. Methods that transform LDR content to HDR content are called inverse tone mapping. Since deep neural networks were applied to the inverse tone mapping problem, researchers have mostly focused on transforming normally exposed LDR images to HDR. In practice, however, people who use inverse tone mapping also encounter ill-exposed images, which state-of-the-art algorithms cannot transform to HDR well. In this work, we propose an end-to-end multi-exposure inverse tone mapping (MITM) framework based on an existing generative adversarial network (GAN). This framework can transform a single LDR image, whether at normal or unsuitable exposure, into a normally exposed HDR image. We use histogram equalization to preprocess the luma of the input LDR images; when training the model, we use intrinsic image decomposition to divide the output HDR images into illuminance and reflectance components, which constrain the luminance information and the color information separately. Experimental results show that this framework corrects unsuitable exposure and provides a better viewing experience than other state-of-the-art algorithms.
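The abstract mentions histogram equalization as the preprocessing step for the luma channel but gives no implementation details. As an illustration only, a minimal NumPy sketch of such a step might look like the following; the function name `equalize_luma` and the assumption that luma is normalized to [0, 1] are ours, not the paper's:

```python
import numpy as np

def equalize_luma(luma, num_bins=256):
    """Histogram-equalize a luma channel with values in [0, 1].

    This is an illustrative sketch, not the paper's exact preprocessing.
    """
    flat = np.clip(luma, 0.0, 1.0).ravel()
    # Build the histogram and its cumulative distribution function.
    hist, bin_edges = np.histogram(flat, bins=num_bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]  # normalize CDF to [0, 1]
    # Map each pixel through the normalized CDF to flatten the histogram.
    equalized = np.interp(flat, bin_edges[:-1], cdf)
    return equalized.reshape(luma.shape)

# Example: a dark-skewed image is stretched toward a uniform distribution,
# which is one plausible way to normalize ill-exposed inputs before the network.
rng = np.random.default_rng(0)
dark_luma = rng.random((8, 8)) ** 3  # skewed toward low luma
result = equalize_luma(dark_luma)
```

Equalizing the luma rather than each RGB channel keeps chromaticity untouched, which is consistent with the abstract's separation of luminance and color information.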