TabLLM: Few-shot Classification of Tabular Data with Large Language Models

Abstract

We study the application of large language models to zero-shot and few-shot classification of tabular data. We prompt the large language model with a serialization of the tabular data to a natural-language string, together with a short description of the classification problem. In the few-shot setting, we fine-tune the large language model using some labeled examples. We evaluate several serialization methods including templates, table-to-text models, and large language models. Despite its simplicity, we find that this technique outperforms prior deep-learning-based tabular classification methods on several benchmark datasets. In most cases, even zero-shot classification obtains non-trivial performance, illustrating the method’s ability to exploit prior knowledge encoded in large language models. Unlike many deep learning methods for tabular datasets, this approach is also competitive with strong traditional baselines like gradient-boosted trees, especially in the very-few-shot setting.
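The core idea in the abstract is to turn each table row into a natural-language string and pair it with a short task description before prompting the model. Below is a minimal sketch of that template-style serialization; the column names, example values, and prompt wording are illustrative assumptions, not the exact templates used in the paper.

```python
# Sketch of template-based serialization of a tabular row, as described in
# the abstract. Column names, values, and the task wording are hypothetical.

def serialize_row(row: dict) -> str:
    """Render one tabular row as a natural-language string ("The <column> is <value>.")."""
    return " ".join(f"The {column} is {value}." for column, value in row.items())

def build_prompt(row: dict, task_description: str) -> str:
    """Combine the serialized row with a short description of the classification task."""
    return f"{serialize_row(row)}\n{task_description}"

row = {"age": 42, "occupation": "teacher", "hours per week": 40}
task = "Does this person earn more than 50,000 dollars per year? Yes or no?"
print(build_prompt(row, task))
# The age is 42. The occupation is teacher. The hours per week is 40.
# Does this person earn more than 50,000 dollars per year? Yes or no?
```

In the zero-shot setting such a prompt is given directly to the large language model; in the few-shot setting the model is additionally fine-tuned on a handful of labeled, serialized examples.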

Publication
Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)
Stefan Hegselmann
Visiting Student
Alejandro Buendia
Research Engineer
Hunter Lang
PhD Student

Hunter’s research focuses on understanding and improving the performance of machine learning algorithms in the wild, with particular applications in MAP inference for graphical models, stochastic optimization, and weak supervision.

Monica Agrawal
PhD Student

Incoming Assistant Professor, Duke University

David Sontag
Professor of EECS

My research focuses on advancing machine learning and artificial intelligence, and using these to transform health care.
