AI+Math Lab @ Korea


In AIML@K, we expand the frontier of artificial intelligence with mathematics through research, teaching, and outreach for humanity.

Meet the AIMLers

Select Publications

Broadband Ground Motion Synthesis by Diffusion Model with Minimal Condition
We present HEGGS, a novel diffusion model for earthquake ground motion simulation that exploits fundamental characteristics of seismic waveform datasets. HEGGS performs strongly across a wide range of evaluations: P/S phase arrivals, envelope correlation, signal-to-noise ratio, GMPE analysis, frequency content analysis, and section plot analysis.
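As an illustration of the diffusion machinery behind such models, here is a minimal sketch of the standard DDPM forward (noising) process applied to a stand-in waveform. Everything here (the linear beta schedule, the sine-wave "trace") is a generic textbook assumption, not the HEGGS conditioning or architecture.

```python
import math
import random

def diffuse(x0, t, T=1000, beta_min=1e-4, beta_max=0.02):
    """One jump of the standard DDPM forward (noising) process:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    with a linear beta schedule (common DDPM defaults, not values
    taken from the HEGGS paper)."""
    alpha_bar = 1.0
    for s in range(1, t + 1):
        beta = beta_min + (beta_max - beta_min) * (s - 1) / (T - 1)
        alpha_bar *= 1.0 - beta
    scale, noise = math.sqrt(alpha_bar), math.sqrt(1.0 - alpha_bar)
    return [scale * x + noise * random.gauss(0.0, 1.0) for x in x0]

# Stand-in for a seismic trace; a real model trains a denoiser to
# invert this process, conditioned on source/site information.
waveform = [math.sin(0.1 * i) for i in range(100)]
noisy = diffuse(waveform, t=500)
```

Sampling then runs the learned reverse process from pure noise, with the minimal condition steering the synthesis.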
ABC3: Active Bayesian Causal Inference with Cohn Criteria in Randomized Experiments
We present ABC3, a Bayesian active learning algorithm for estimating the conditional average treatment effect in causal inference. Our theoretical analysis links ABC3 to maximum mean discrepancy (MMD), showing the importance of balanced sampling for accurate estimation.
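The MMD quantity at the heart of this analysis can be sketched with a plain-Python estimator; the RBF kernel, bandwidth, and toy covariates below are illustrative choices, not the paper's setup.

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two feature tuples.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd_sq(X, Y, gamma=1.0):
    """Biased estimator of squared MMD between samples X and Y:
    mean k(x, x') + mean k(y, y') - 2 * mean k(x, y)."""
    kxx = sum(rbf(a, b, gamma) for a in X for b in X) / len(X) ** 2
    kyy = sum(rbf(a, b, gamma) for a in Y for b in Y) / len(Y) ** 2
    kxy = sum(rbf(a, b, gamma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2.0 * kxy

# Toy covariates: a balanced treatment/control split keeps MMD small,
# while mismatched groups drive it up.
treated = [(0.0,), (0.5,), (1.0,)]
control = [(0.1,), (0.5,), (0.9,)]
print(mmd_sq(treated, control))                    # near zero
print(mmd_sq(treated, [(5.0,), (5.5,), (6.0,)]))   # much larger
```

An active learner in this spirit would pick the next experimental unit so that the resulting treatment/control covariate distributions keep this discrepancy small.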
Bypassing Stationary Points in Training Deep Learning Models
Gradient-descent-based optimizers are prone to slowdowns when training deep learning models, as stationary points are ubiquitous in the loss landscape of most neural networks. We present an intuitive concept of bypassing stationary points and realize it as a novel method designed to actively rescue optimizers from slowdowns encountered in neural network training. The method, the bypass pipeline, revitalizes the optimizer by extending the model space, and later contracts the model back to its original space with function-preserving algebraic constraints. We implement the method as the bypass algorithm, verify that the algorithm exhibits the theoretically expected bypassing behavior, and demonstrate its empirical benefit on regression and classification benchmarks. The bypass algorithm is highly practical, as it is computationally efficient and compatible with other improvements to first-order optimizers. In addition, bypassing for neural networks opens new theoretical research directions such as model-specific bypassing and neural architecture search (NAS).
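The extension idea can be illustrated with a classic function-preserving widening of a toy one-hidden-layer ReLU network (Net2Net-style neuron duplication); the paper's actual bypass construction and its contraction step may differ.

```python
# Toy one-hidden-layer ReLU net: f(x) = sum_j v[j] * relu(w[j] * x).
def forward(w, v, x):
    return sum(vj * max(0.0, wj * x) for wj, vj in zip(w, v))

def extend(w, v, j):
    """Function-preserving extension: duplicate hidden unit j and split
    its outgoing weight in half. The network gains an extra parameter
    direction (more room for the optimizer to move) while computing
    exactly the same function."""
    w_ext = w + [w[j]]
    v_ext = v[:j] + [v[j] / 2.0] + v[j + 1:] + [v[j] / 2.0]
    return w_ext, v_ext

w, v = [1.0, -2.0], [0.5, 1.5]
w2, v2 = extend(w, v, 0)
# The extended model agrees with the original at every test input.
assert all(abs(forward(w, v, x) - forward(w2, v2, x)) < 1e-12
           for x in (-3.0, 0.0, 0.7, 3.0))
```

A contraction step would merge such duplicated units back, returning to the original model space without changing the computed function.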
SentenceLDA: Discriminative and Robust Document Representation with Sentence Level Topic Model
A subtle difference in context yields totally different nuances even for lexically identical words; conversely, two different words can convey similar meanings in a homogeneous context. Word spelling information alone is therefore insufficient for a quality text representation. We propose SentenceLDA, a sentence-level topic model that combines the modern SentenceBERT with classical LDA to extend the semantic unit from word to sentence. By extending the semantic unit, we verify that SentenceLDA returns more discriminative document representations than other topic models, while maintaining LDA's elegant probabilistic interpretability. We also verify the robustness of SentenceLDA by comparing inference results on original and paraphrased texts. Additionally, we demonstrate one possible application of SentenceLDA to corpus-level key opinion mining by applying it to an argumentative corpus, DebateSum.
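Why sentences make natural semantic units can be sketched with a toy nearest-topic assignment over hand-made sentence vectors. Everything below (the 3-d embeddings, the fixed topic centroids, the topic names) is illustrative; in SentenceLDA the embeddings come from SentenceBERT and the topics from LDA-style probabilistic inference, not cosine matching.

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hand-made "sentence embeddings" and topic directions.
sentences = {
    "The court ruled on the appeal.":  (0.9, 0.1, 0.0),
    "The judge dismissed the case.":   (0.8, 0.2, 0.1),
    "The striker scored a late goal.": (0.1, 0.9, 0.2),
}
topics = {"law": (1.0, 0.0, 0.0), "sport": (0.0, 1.0, 0.0)}

# Each whole sentence, not each word, is assigned to a topic.
assignment = {s: max(topics, key=lambda t: cosine(e, topics[t]))
              for s, e in sentences.items()}
print(assignment)
```

Because the unit of assignment is a sentence embedding, paraphrases that share no surface words can still land in the same topic, which is the robustness property the abstract tests.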
Noun-MWP: Math Word Problems Meet Noun Answers
We introduce a new type of problem for math word problem (MWP) solvers, named Noun-MWPs, whose answer is a non-numerical string containing a noun from the problem text. We present a novel method to empower existing MWP solvers to handle Noun-MWPs, and apply it to the Expression-Pointer Transformer (EPT). Our model, N-EPT, solves Noun-MWPs significantly better than other models while also solving conventional MWPs well. Solving Noun-MWPs may help bridge MWP solvers and traditional question-answering NLP models.

Research Partners

Ministry of Science and ICT (과기정통부)
Ministry of Education (교육부)
Korea Meteorological Administration (기상청)
Ministry of Trade, Industry and Energy (산업통상부)
Samsung Electronics (삼성전자)
Institute of Information & Communications Technology Planning & Evaluation (정보통신기획평가원)
Korea Technology and Information Promotion Agency for SMEs (중소기업기술정보진흥원)
Ministry of SMEs and Startups (중소벤처기업부)
Kolon Industries (코오롱인더스트리)
Korea Meteorological Institute (한국기상산업기술원)
Korea Planning & Evaluation Institute of Industrial Technology (한국산업기술기획평가원)
Korea Institute for Advancement of Technology (한국산업기술진흥원)
Korea Hydro & Nuclear Power (한국수력원자력)
National Research Foundation of Korea (한국연구재단)
Hansol Chemical (한솔케미칼)

Latest

LFNO: Bridging Laplace and Fourier for Effective Operator Learning

How can we combine the strengths of the Laplace Neural Operator and the Fourier Neural Operator?

Bypass and Beyond: Extension–Contraction Strategies for Escaping Training Stagnation and Achieving Lossless Pruning

How well can algebraically grounded methods tackle optimization and model compression challenges in deep learning?

NeurIPS 2025: Four Workshop Papers

🎉🎉🎉🎉 AIML@K contributes four workshop papers to NeurIPS 2025!

Kudos to Taehun, Jeung-un, Suhyun, and our external collaborators!

Two New Masters Graduating from AIML@K!

Congratulations to our two newly minted Masters! May the Force be with you as you embark on your next adventure.

ICCV 2025: Two Workshop Papers

🎉🎉🎉 AIML@K contributes two workshop papers to ICCV 2025!

Kudos to Jaeheun, Jaehyuk, Yeajin, Bosung, and Suhyun!

Position: Solve Layerwise Linear Models First to Understand Neural Dynamical Phenomena (Neural Collapse, Emergence, Lazy/Rich Regime, and Grokking)

Can layerwise linear models simplify complex neural network dynamics and speed up deep learning research?

Korea University Physics Prof. Alex Rothkopf's Talk

The talk will cover AI and quantum computing.

Autoformalization and Automated Theorem Proving with Language Models

Can language models translate natural language math into formal proofs (autoformalization) and more?

An Exactly Solvable Model for Emergence and Scaling Laws in the Multitask Sparse Parity Problem

Can an analytically solvable model explain emergent behaviors and scaling laws in multitask learning, showing how new skills appear as training progresses?

ICML 2025: One Main Track Paper and Four Workshop Papers Accepted

🎉🎉🎉🎉🎉 AIML@K members are presenting one main track paper and four workshop papers at ICML 2025! Congratulations!

Formalizing Mathematics: Why and How

Formalization in mathematics – translating math into computer-readable language – is growing into a collection of successful projects!

Three AIML@K Students Appointed as TAs for the University-wide Python Course GECT 002

We are proud to announce that three students from AIML@K have been selected as Teaching Assistants (TAs) for the university-wide introductory Python programming course “Software Programming Basics” (SW프로그래밍의 기초).

Two New Masters Graduating from AIML@K!

TWO new masters have just graduated! Congratulations, and may the force be with you!

Taehun Cha Wins First Place at the Concordia Contest @ NeurIPS 2024

We are glad to announce that Taehun Cha won first place at the Concordia Contest @ NeurIPS 2024! 🎉

AIML@K 2024 Fall Workshop

The 5th AIML@K Workshop will take place on September 27, 2024

Four AIML@K Students Serve as TAs for Korea University’s First University-wide Data Science and Artificial Intelligence Course

We are proud to announce that four AIML@K students were selected as teaching assistants (TAs) in the inaugural recruitment for Korea University’s first university-wide course on Data Science and Artificial Intelligence. Chosen through a rigorous university-wide process, these TAs are integral to the launch and success of this pioneering course.

One New Master Graduating from AIML@K!

ONE new master has just graduated! Congratulations, and may the Force be with you!

AIML@K Students Receive Best Paper Award at KCC 2024!

We are thrilled to announce that two of our AIML@K students have been awarded the prestigious Best Paper Award at the Korea Computer Congress (KCC; 한국컴퓨터종합학술대회) 2024. This remarkable achievement highlights their exceptional research and the dedication behind their award-winning paper, titled “A Data-driven Approach for Predicting Glass Transition Temperature of Epoxy Polymers”.

AIML@K Spring 2024 Workshop

The 4th AIML@K Workshop takes place!

Four AIML@K students serve as TAs for Korea University's first university-wide Python course

We are delighted to provide so many talented teaching assistants, selected via university-wide open recruiting, for Korea University’s first university-wide Python course. The course, titled “Software Programming Basics” (SW프로그래밍의 기초), is mandatory for all first-year students at Korea University.

Four New Masters Graduating from AIML@K!

FOUR new masters have just graduated! Congratulations, and may the Force be with you all!

AIML@K Ranks National Seventh in AI Grand Challenge Stage 2 Competition

CONGRATULATIONS to all participants of team ‘aimlk’ for securing seventh place at the 6th AI Grand Challenge, Stage 2!

AIML@K 2023 Fall Workshop

The 3rd AIML@K Workshop takes place!

One New Master Graduating from AIML@K!

One new master has just graduated! Congratulations, and may the Force be with you!

AIML@K Ranks National Top 2 in AI Grand Challenge Open Track Competition

CONGRATULATIONS to all participants of team ‘aimlk’ for valiantly competing with AI tech companies and computer science research labs from all over South Korea and winning second place at the 6th AI Grand Challenge, Open Track!

AIML@K 2023 Spring Workshop

The 2nd AIML@K Workshop takes place!

AIML@K Ranks Seventh in 6th AI Grand Challenge, Stage 1 Competition

CONGRATULATIONS to all participants of team ‘aimlk’ for ranking seventh nationally in the 6th AI Grand Challenge, Stage 1 Competition! It was a fierce competition with AI tech companies and computer science research labs from all over South Korea.

AIML@K Ranks Sixth in 5th AI Grand Challenge, Stage 1 Competition

CONGRATULATIONS to all participants of team ‘AIL-K’ for ranking sixth nationally in the 5th AI Grand Challenge, Stage 1 Competition! It was our first participation in the AI Grand Challenge – a series of the largest government-hosted AI competitions, running since 2017.