Abstract
Topic models have the potential to improve search and browsing by extracting useful semantic themes from web pages and other text documents. When learned topics are coherent and interpretable, they are valuable for faceted browsing, result-set diversity analysis, and document retrieval. However, on small collections or noisy text (e.g., web search result snippets or blog posts), learned topics can be less coherent, less interpretable, and less useful. To overcome this, we propose two methods to regularize the learning of topic models. Our regularizers work by creating a structured prior over words that reflects broad patterns in external data. Using thirteen datasets, we show that both regularizers improve topic coherence and interpretability while learning a faithful representation of the collection of interest. Overall, this work makes topic models more useful across a broader range of text data.
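The abstract only summarizes the approach, so the sketch below illustrates the general idea it describes: shaping the prior over topic-word distributions with word statistics drawn from external data. This is not the paper's actual regularizers; it is a minimal, hypothetical example using gensim's LDA, whose `eta` parameter accepts a per-word prior vector. The `external_counts` values are invented stand-ins for statistics that would, in practice, come from a large clean corpus such as Wikipedia.

```python
# Minimal sketch only: an asymmetric topic-word prior informed by external
# word statistics. The paper's structured-prior regularizers are more
# involved; this shows the mechanism, not the method.
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy in-collection documents (e.g., noisy search-result snippets).
docs = [["apple", "banana", "fruit"],
        ["fruit", "juice", "apple"],
        ["car", "engine", "road"],
        ["road", "car", "fuel"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Hypothetical word counts from an external corpus (invented here;
# in practice derived from a large reference collection).
external_counts = np.random.default_rng(0).integers(1, 100, len(dictionary))

# Asymmetric Dirichlet prior over words: words prominent in the external
# data receive more prior mass, nudging topics toward broad patterns.
eta = 0.01 + 0.1 * external_counts / external_counts.sum()

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, eta=eta,
               passes=50, random_state=0)
for topic_id, words in lda.show_topics(num_words=4, formatted=False):
    print(topic_id, [w for w, _ in words])
```

On small or noisy collections like the toy one above, the word-level prior is doing the work that abundant training data would otherwise do, which is the motivation the abstract gives for regularization.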
Original language | English |
---|---|
Title of host publication | Advances in Neural Information Processing Systems 24 |
Editors | Rich Zemel, John Shawe-Taylor, Peter Bartlett, Fernando Pereira and Kilian Weinberger |
Place of Publication | Granada, Spain |
Publisher | Neural Information Processing Systems Foundation |
Pages | 9 |
Edition | Peer Reviewed |
ISBN (Print) | 9781618395993 |
Publication status | Published - 2011 |
Event | Neural Information Processing Systems (NIPS 2011) - Granada, Spain. Duration: 13 Dec 2011 → 15 Dec 2011 |
Conference

Conference | Neural Information Processing Systems (NIPS 2011) |
---|---|
Period | 13/12/11 → 15/12/11 |
Other | December 13-15, 2011 |
Internet address | https://papers.nips.cc/paper/4487-contextual-gaussian-process-bandit-optimization |