Weidi Xu

I am a researcher with over ten years of experience spanning academia and industry. I am currently with Inf, a startup building large language model solutions for industry applications. I received my Ph.D. from Peking University under the supervision of Prof. Ying Tan; before that, I obtained my B.S. from South China University of Technology. My current research focuses on logical reasoning, especially neuro-symbolic methods. Previously, I mainly worked on latent variable models for NLP. We have research intern positions available at Inf. If you are interested in LLMs and logical reasoning, please contact me at wdxu@inftech.ai.

徐威迪 / wead_hsu@163.com / Scholar / Twitter / PubMed


News

Blogs

Publications

[ICLR 2024] LogicMP: A Neuro-Symbolic Approach for Encoding First-order Logic Constraints.

[KDD 2023] Learning to Discover Various Simpson’s Paradoxes.

[EMNLP 2022] Extracting Trigger-sharing Events via an Event Matrix.

[WSDM 2020] Modeling Across-Context Attention For Long-Tail Query Classification in E-commerce.

[EMNLP 2020] Question Directed Graph Attention Network for Numerical Reasoning over Text.

[SIGIR 2020] Symmetric Regularization based BERT for Pair-wise Semantic Reasoning.

[ACL 2020] SpellGCN: Incorporating Phonological and Visual Similarities into Language Models for Chinese Spelling Check.

[CVPR 2020] Data-Efficient Semi-Supervised Learning by Reliable Edge Mining.

[TNNLS 2019] Semisupervised Text Classification by Variational Autoencoder.

[Neurocomputing 2019] Semi-supervised Target-level Sentiment Analysis via Variational Autoencoder.

[CoNLL 2019] Variational Semi-supervised Aspect-term Sentiment Analysis via Transformer.

[CEC 2018] TextDream: Conditional Text Generation by Searching in the Semantic Space.

[AAAI 2017] Variational Autoencoders for Semi-supervised Text Classification.

[IJCNN 2016] Multi-digit Image Synthesis using Recurrent Conditional Variational Autoencoder.

[中国科学 2014] Computational Time Complexity Analysis of Continuous (1+1) Evolutionary Algorithms Based on the Average Gain Model.