In this talk, we will discuss autoformalization and automated theorem proving. Autoformalization is the task of automatically translating natural language into a formal language. Proof assistants can verify proofs effectively, but they require users to write proofs in a formal language, which is challenging and time-consuming even for mathematicians and creates a significant barrier to entry. We introduce recent advances in autoformalization using language models: although translation quality is still far from perfect and often fails on complex statements, fine-tuning large language models shows promising results. We will also discuss automated theorem proving (ATP), which generates proofs for theorems stated in a formal language. Our focus is on ATP with language models, which opens new pathways for theorem proving. Recent progress in ATP has been driven by the availability of extensive mathematical data and advanced Lean tactics. By integrating machine learning with proof assistants, ATP systems have solved high-school-level Olympiad math problems; however, college-level math problems remain a challenge. We will explore the current status and future directions of ATP.
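To make the two tasks concrete, here is a toy illustration of our own (not drawn from any particular system): an autoformalization model receives a natural-language statement and aims to produce a formal counterpart, here written in Lean 4; an ATP system then aims to find the proof term or tactic script that closes the goal. The theorem name is our own choice.

```lean
-- Natural-language input: "For every natural number n, n + 0 = n."
-- A Lean 4 formalization an autoformalization system might produce:
theorem n_add_zero (n : Nat) : n + 0 = n := by
  rfl  -- a proof an ATP system would search for; here the goal holds definitionally
```

Real inputs are far harder: Olympiad statements involve nested quantifiers and implicit conventions, which is where current systems most often mistranslate or fail to find proofs.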