The spread of misinformation, including fake news and unfounded rumors, poses a significant threat to the integrity of information ecosystems and public trust. The emergence of Large Language Models (LLMs) has the potential to reshape the landscape of combating misinformation. LLMs are a double-edged sword, presenting both opportunities and challenges in this fight. On one hand, their extensive knowledge and advanced reasoning capabilities make them potent tools for identifying and countering misinformation. On the other hand, their growing accessibility and ability to produce credible-sounding text pose a risk of being exploited to generate misinformation at scale. In this talk, we will present our recent work on these dual aspects of LLMs in the context of misinformation: 1) how the strong reasoning abilities of LLMs can be leveraged to combat misinformation, and 2) the potential risks posed by LLM-generated misinformation and strategies to mitigate them. Finally, we will discuss future directions and challenges in combating misinformation in the era of LLMs.