TY - JOUR
T1 - Discovering Fine-grained Sentiment in Suicide Notes
JF - Biomedical Informatics Insights
Y1 - 2012
A1 - Wenbo Wang
A1 - Lu Chen
A1 - Ming Tan
A1 - Shaojun Wang
A1 - Amit Sheth
KW - Emotion Identification
KW - Sentiment Analysis
KW - Suicide Note
AB - This paper presents our solution for the i2b2 sentiment classification challenge. Our hybrid system consists of machine learning and rule-based classifiers. For the machine learning classifier, we investigate a variety of lexical, syntactic and knowledge-based features, and show through experiments how much these features contribute to the performance of the classifier. For the rule-based classifier, we propose an algorithm to automatically extract effective syntactic and lexical patterns from training examples. The experimental results show that the rule-based classifier outperforms the baseline machine learning classifier using unigram features. By combining the machine learning classifier and the rule-based classifier, the hybrid system achieves a better trade-off between precision and recall, and yields the highest micro-averaged F-measure (0.5038), which is better than the mean (0.4875) and median (0.5027) micro-averaged F-measures among all participating teams.
ER -
TY - CONF
T1 - Extracting Diverse Sentiment Expressions with Target-dependent Polarity from Twitter
T2 - International AAAI Conference on Weblogs and Social Media (ICWSM)
Y1 - 2012
A1 - Lu Chen
A1 - Wenbo Wang
A1 - Meenakshi Nagarajan
A1 - Shaojun Wang
A1 - Amit Sheth
KW - Microblog
KW - Opinion Mining
KW - Optimization
KW - Sentiment Analysis
KW - Sentiment Expression
KW - Sentiment Extraction
KW - Social Media
AB - The problem of automatically extracting sentiment expressions from informal text, as in microblogs such as tweets, is a recent area of investigation. Compared to formal text, such as product reviews or news articles, one of the key challenges lies in the wide diversity and informal nature of sentiment expressions, which cannot be trivially enumerated or captured using predefined lexical patterns. In this work, we present an optimization-based approach to automatically extract sentiment expressions for a given target (e.g., a movie or a person) from a corpus of unlabeled tweets. Specifically, we make three contributions: (i) we recognize a diverse and richer set of sentiment-bearing expressions in tweets, including formal and slang words/phrases, not limited to pre-specified syntactic patterns; (ii) instead of associating sentiment with an entire tweet, we assess the target-dependent polarity of each sentiment expression, which is determined by the nature of its target; (iii) we provide a novel formulation of assigning polarity to a sentiment expression as a constrained optimization problem over the tweet corpus. Experiments conducted on two domains, tweets mentioning movie and person entities, show that our approach improves accuracy in comparison with several baseline methods, and that the improvement becomes more prominent with increasing corpus size.
PB - Proceedings of the 6th International AAAI Conference on Weblogs and Social Media (ICWSM)
CY - Dublin, Ireland
ER -
TY - ABST
T1 - Beyond Positive/Negative Classification: Automatic Extraction of Sentiment Clues from Microblogs
Y1 - 2011
A1 - Lu Chen
A1 - Wenbo Wang
A1 - Meenakshi Nagarajan
A1 - Shaojun Wang
A1 - Amit Sheth
KW - Optimization
KW - Opinion Mining
KW - Sentiment Analysis
KW - Sentiment Extraction
AB - Microblogging provides a large volume of text for learning and understanding people's sentiments on a variety of topics. Much of the current work on sentiment analysis of microblogs (e.g., tweets) focuses on document-level polarity. However, identifying sentiment clues with respect to specific targets (e.g., named entities) can be more useful than pure document polarity results. For example, sentiment clues such as 'must see', 'awesome' and 'rate 5 stars' (in the movie domain) are much more meaningful than the polarities of tweets alone. Previous attempts at single-word sentiment clue extraction from formal text will not suffice for extracting multi-word sentiment phrases. The single words 'must' and 'see' do not separately convey polarity, but their combination 'must see' expresses strong positive sentiment towards a movie target. Another issue in identifying sentiment clues is handling informal sentiment expressions, such as misspellings ('kool'), abbreviations ('wtf') and slang ('da bomb'). In this paper, we propose an approach for automatically extracting both single-word and multi-word sentiment clues, including both traditional and slang expressions. We also present a mechanism for assessing their target-specific polarities from an unlabeled microblog corpus. Our approach first leverages traditional and slang subjective lexicons to generate candidate sentiment clues for a given target. It then incorporates inter-clue relations from corpora into an optimization model to estimate the probability of a clue denoting positive/negative sentiment. Experiments using microblog data sets on two different domains -- movie and person -- show that the proposed approach can effectively 1) extract single-word as well as phrase sentiment clues, 2) identify both traditional and slang sentiment clues, and 3) determine their target-specific polarities. We also demonstrate that the proposed approach outperforms several baseline methods.
ER -
TY - CONF
T1 - A Rate Distortion Approach for Semi-Supervised Conditional Random Fields
Y1 - 2010
A1 - G. Haffari
A1 - Y. Wang
A1 - Shaojun Wang
A1 - G. Mori
KW - semi-supervised learning
AB - We propose a novel information theoretic approach for semi-supervised learning of conditional random fields that defines a training objective to combine the conditional likelihood on labeled data and the mutual information on unlabeled data. In contrast to previous minimum conditional entropy semi-supervised discriminative learning methods, our approach is grounded on a more solid foundation, the rate distortion theory in information theory. We analyze the tractability of the framework for structured prediction and present a convergent variational training algorithm to defy the combinatorial explosion of terms in the sum over label configurations. Our experimental results show the rate distortion approach outperforms standard l2 regularization, minimum conditional entropy regularization as well as maximum conditional entropy regularization on both multi-class classification and sequence labeling problems.
PB - Advances in Neural Information Processing Systems
ER -
TY - CONF
T1 - Information Theoretic Regularization for Semi-Supervised Boosting
T2 - Knowledge Discovery and Data Mining - KDD2009
Y1 - 2009
A1 - Lei Zheng
A1 - Yan Liu
A1 - Shaojun Wang
AB - We present novel semi-supervised boosting algorithms that incrementally build linear combinations of weak classifiers through generic functional gradient descent using both labeled and unlabeled training data. Our approach is based on extending the information regularization framework to boosting, bearing loss functions that combine log loss on labeled data with information-theoretic measures to encode unlabeled data. Even though the information-theoretic regularization terms make the optimization non-convex, we propose simple sequential gradient descent optimization algorithms, and obtain markedly improved results on synthetic, benchmark and real-world tasks over supervised boosting algorithms, which use the labeled data alone, and over a state-of-the-art semi-supervised boosting algorithm.
CY - Paris, France
ER -
TY - CONF
T1 - Monetizing User Activity on Social Networks - Challenges and Experiences
Y1 - 2009
A1 - Meenakshi Nagarajan
A1 - Kamal Baid
A1 - Amit Sheth
A1 - Shaojun Wang
PB - Beyond Search: Semantic Computing and Internet Economics 2009 Workshop
ER -
TY - CONF
T1 - Boosting with Incomplete Information
Y1 - 2008
A1 - Shaojun Wang
A1 - G. Haffari
A1 - F. Jiao
A1 - Y. Wang
A1 - G. Mori
ER -
TY - CONF
T1 - Constrained Classification on Structured Data
Y1 - 2008
A1 - C. Lee
A1 - Shaojun Wang
A1 - M. Brown
A1 - A. Murtha
A1 - R. Greiner
ER -
TY - ABST
T1 - Monetizing User Activity on Social Networks
Y1 - 2008
A1 - Amit Sheth
A1 - Meenakshi Nagarajan
A1 - Kamal Baid
A1 - Shaojun Wang
KW - Social Networks
KW - Monetization
KW - User Activity
KW - Computational Advertising
KW - Off-topic Content
KW - Intents
AB - In this work, we investigate techniques to monetize user activity on public forums, marketplaces and groups on social network sites. Our approach involves (a) identifying the monetization potential of user posts and (b) eliminating off-topic content in monetizable posts so that the most relevant keywords are used for advertising. Our first user study, involving 30 users and data from MySpace and Facebook, shows that 52% of ad impressions shown after using our system were more targeted, compared to the 30% of relevant impressions generated without our system. A second, smaller study suggests that profile ads based on user activity generate more interest than ads based solely on profile information.
ER -
TY - CONF
T1 - Segmenting Brain Tumors Using Pseudo-Conditional Random Fields
Y1 - 2008
A1 - M. Brown
A1 - A. Murtha
A1 - C. Lee
A1 - Shaojun Wang
A1 - R. Greiner
ER -
TY - ABST
T1 - Targeted Content Delivery for Social Media Content
Y1 - 2008
A1 - Amit Sheth
A1 - Meenakshi Nagarajan
A1 - Kamal Baid
A1 - Shaojun Wang
KW - Mutual Information
KW - Contextual Keywords
KW - Contextual Content Delivery
KW - Social Media Content
AB - Spotting contextually relevant keywords is fundamental to effective content suggestions on the Web. In this regard, misspellings, entity variations and off-topic discussions in content from Social Media pose unique challenges. Here, we present an algorithm that assists content delivery systems by identifying contextually relevant keywords and eliminating off-topic keywords. A preliminary user study over data from MySpace and Facebook clearly suggests the usefulness of our work in delivering more targeted content suggestions.
ER -
TY - CONF
T1 - Unsupervised Discovery of Compound Entities for Relationship Extraction
Y1 - 2008
A1 - Shaojun Wang
A1 - Cartic Ramakrishnan
A1 - Pablo N. Mendes
A1 - Amit Sheth
AB - In this paper we investigate unsupervised population of a biomedical ontology via information extraction from biomedical literature. Relationships in text seldom connect simple entities. We therefore focus on identifying compound entities rather than mentions of simple entities. We present a method based on rules over grammatical dependency structures for unsupervised segmentation of sentences into compound entities and relationships. We complement the rule-based approach with a statistical component that prunes structures with low information content, thereby reducing false positives in the prediction of compound entities, their constituents and relationships. The extraction is manually evaluated with respect to the UMLS Semantic Network by analyzing the conformance of the extracted triples with the corresponding UMLS relationship type definitions.
ER -
TY - JOUR
T1 - Implicit Online Learning with Kernels
Y1 - 2007
A1 - L. Cheng
A1 - S. Vishwanathan
A1 - D. Schuurmans
A1 - Shaojun Wang
A1 - Terry Caelli
AB - We present two new algorithms for online learning in reproducing kernel Hilbert spaces. Our first algorithm, ILK (implicit online learning with kernels), employs a new, implicit update technique that can be applied to a wide variety of convex loss functions. We then introduce a bounded memory version, SILK (sparse ILK), that maintains a compact representation of the predictor without compromising solution quality, even in non-stationary environments. We prove loss bounds and analyze the convergence rate of both. Experimental evidence shows that our proposed algorithms outperform current methods on synthetic and real data.
ER -
TY - JOUR
T1 - Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields
Y1 - 2007
A1 - F. Jiao
A1 - D. Schuurmans
A1 - R. Greiner
A1 - C. Lee
A1 - Shaojun Wang
ER -
TY - JOUR
T1 - Almost Sure Convergence of Titterington's Recursive Estimator for Finite Mixture Models
JF - Statistics & Probability Letters
Y1 - 2006
A1 - Y. Zhao
A1 - Shaojun Wang
AB - Titterington proposed a recursive parameter estimation algorithm for finite mixture models. However, due to the well-known problem of singularities and the multiple maxima, minima and saddle points that can occur on the likelihood surface, convergence analysis has seldom been carried out. In this paper, under mild conditions, we show the global convergence of Titterington's recursive estimator and its MAP variant for mixture models of the full regular exponential family.
ER -
TY - CONF
T1 - An Online Discriminative Approach to Background Subtraction
Y1 - 2006
A1 - S. Vishwanathan
A1 - T. Caelli
A1 - L. Cheng
A1 - D. Schuurmans
A1 - Shaojun Wang
ER -
TY - CONF
T1 - Semi-Supervised Conditional Random Fields for Improved Sequence Segmentation and Labeling
Y1 - 2006
A1 - Shaojun Wang
A1 - R. Greiner
A1 - F. Jiao
A1 - D. Schuurmans
A1 - C. Lee
ER -
TY - CONF
T1 - Stochastic Analysis of Lexical and Semantic Enhanced Structural Language Model
T2 - 8th International Colloquium on Grammatical Inference (ICGI)
Y1 - 2006
A1 - Shaojun Wang
A1 - Shaomin Wang
A1 - Li Cheng
A1 - Russell Greiner
A1 - Dale Schuurmans
CY - Tokyo, Japan
ER -
TY - CONF
T1 - Using Query-Specific Variance Estimates to Combine Bayesian Classifiers
Y1 - 2006
A1 - R. Greiner
A1 - Shaojun Wang
A1 - C. Lee
ER -
TY - JOUR
T1 - Combining Statistical Language Models via the Latent Maximum Entropy Principle
Y1 - 2005
A1 - F. Peng
A1 - Y. Zhao
A1 - Shaojun Wang
A1 - D. Schuurmans
AB - We present a unified probabilistic framework for statistical language modeling which can simultaneously incorporate various aspects of natural language, such as local word interaction, syntactic structure and semantic document information. Our approach is based on a recent statistical inference principle we have proposed -- the latent maximum entropy principle -- which allows relationships over hidden features to be effectively captured in a unified model. Our work extends previous research on maximum entropy methods for language modeling, which only allow observed features to be modeled. The ability to conveniently incorporate hidden variables allows us to extend the expressiveness of language models while alleviating the necessity of pre-processing the data to obtain explicitly observed features. We describe efficient algorithms for marginalization, inference and normalization in our extended models. We then use these techniques to combine two standard forms of language models: local lexical models (Markov N-gram models) and global document-level semantic models (probabilistic latent semantic analysis). Our experimental results on the Wall Street Journal corpus show that we obtain an 18.5% reduction in perplexity compared to the baseline tri-gram model with Good-Turing smoothing.
ER -
TY - CONF
T1 - Exploiting Syntactic, Semantic and Lexical Regularities in Language Modeling via Directed Markov Random Fields
Y1 - 2005
A1 - L. Cheng
A1 - D. Schuurmans
A1 - R. Greiner
A1 - Shaojun Wang
ER -
TY - CONF
T1 - Variational Bayesian Image Modelling
Y1 - 2005
A1 - F. Jiao
A1 - Shaojun Wang
A1 - D. Schuurmans
A1 - L. Cheng
ER -
TY - JOUR
T1 - Augmenting Naive Bayes Text Classifier Using Statistical N-Gram Language Modeling
Y1 - 2004
A1 - F. Peng
A1 - Shaojun Wang
A1 - D. Schuurmans
ER -
TY - CONF
T1 - Exploiting Syntactic, Semantic and Lexical Regularities in Language Modeling via Directed Markov Random Fields
T2 - International Symposium on Chinese Spoken Language Processing (ISCSLP)
Y1 - 2004
A1 - Shaojun Wang
A1 - Shaomin Wang
A1 - Russell Greiner
A1 - Dale Schuurmans
A1 - Li Cheng
CY - Singapore, Singapore
ER -
TY - JOUR
T1 - Learning Mixture Models with the Regularized Latent Maximum Entropy Principle
Y1 - 2004
A1 - D. Schuurmans
A1 - F. Peng
A1 - Y. Zhao
A1 - Shaojun Wang
AB - We present a new approach to estimating mixture models based on a new inference principle we have proposed: the latent maximum entropy principle (LME). LME is different both from Jaynes' maximum entropy principle and from standard maximum likelihood estimation. We demonstrate the LME principle by deriving new algorithms for mixture model estimation, and show how robust new variants of the EM algorithm can be developed. Our experiments show that estimation based on LME generally yields better results than maximum likelihood estimation, particularly when inferring latent variable models from small amounts of data.
ER -
TY - CONF
T1 - Boltzmann Machine Learning with the Latent Maximum Entropy Principle
Y1 - 2003
A1 - Y. Zhao
A1 - Shaojun Wang
A1 - D. Schuurmans
A1 - F. Peng
ER -
TY - JOUR
T1 - Language and Task Independent Text Categorization Using Character Level N-Gram Language Models
Y1 - 2003
A1 - Shaojun Wang
A1 - F. Peng
A1 - D. Schuurmans
ER -
TY - CONF
T1 - Language Independent Automated Authorship Attribution with Character Level N-Gram Language Modeling
Y1 - 2003
A1 - F. Peng
A1 - Shaojun Wang
A1 - D. Schuurmans
ER -
TY - CONF
T1 - Latent Maximum Entropy Approach for Semantic N-gram Language Modeling
Y1 - 2003
A1 - F. Peng
A1 - Shaojun Wang
A1 - D. Schuurmans
AB - In this paper, we describe a unified probabilistic framework for statistical language modeling--the latent maximum entropy principle--which can effectively incorporate various aspects of natural language, such as local word interaction, syntactic structure and semantic document information. Unlike previous work on maximum entropy methods for language modeling, which only allow explicit features to be modeled, our framework also allows relationships over hidden features to be captured, resulting in a more expressive language model. We describe efficient algorithms for marginalization, inference and normalization in our extended models. We then present promising experimental results for our approach on the Wall Street Journal corpus.
ER -
TY - CONF
T1 - Learning Continuous Latent Variable Models with Bregman Divergences
Y1 - 2003
A1 - Shaojun Wang
A1 - D. Schuurmans
ER -
TY - CONF
T1 - Learning Latent Variable Models with Bregman Divergences
Y1 - 2003
A1 - Shaojun Wang
A1 - D. Schuurmans
ER -
TY - CONF
T1 - Learning Mixture Models with the Latent Maximum Entropy Principle
Y1 - 2003
A1 - Y. Zhao
A1 - Shaojun Wang
A1 - F. Peng
A1 - D. Schuurmans
ER -
TY - CONF
T1 - Semantic N-gram Language Modeling with the Latent Maximum Entropy Principle
Y1 - 2003
A1 - D. Schuurmans
A1 - F. Peng
A1 - Y. Zhao
A1 - Shaojun Wang
ER -
TY - CONF
T1 - Text Classification in Asian Languages Without Word Segmentation
Y1 - 2003
A1 - D. Schuurmans
A1 - Shaojun Wang
A1 - F. Peng
A1 - X. Huang
ER -
TY - CONF
T1 - The Latent Maximum Entropy Principle
Y1 - 2002
A1 - Shaojun Wang
A1 - Y. Zhao
A1 - D. Schuurmans
A1 - R. Rosenfeld
ER -
TY - CONF
T1 - Predicting Oral Reading Miscues
Y1 - 2002
A1 - V. Winter
A1 - Shaojun Wang
A1 - J. Beck
A1 - J. Mostow
A1 - B. Tobin
ER -
TY - CONF
T1 - Almost Sure Convergence of Titterington's Recursive Estimator for Finite Mixture Models
Y1 - 2001
A1 - Y. Zhao
A1 - Shaojun Wang
AB - Titterington proposed a recursive parameter estimation algorithm for finite mixture models. However, due to the well-known problem of singularities and the multiple maxima, minima and saddle points that can occur on the likelihood surface, convergence analysis has seldom been carried out. In this paper, under mild conditions, we show the global convergence of Titterington's recursive estimator and its MAP variant for mixture models of the full regular exponential family.
ER -
TY - CONF
T1 - Latent Maximum Entropy Principle for Statistical Language Modeling
Y1 - 2001
A1 - Shaojun Wang
A1 - Y. Zhao
A1 - R. Rosenfeld
ER -
TY - JOUR
T1 - On-Line Bayesian Tree-Structured Transformation of HMMs with Optimal Model Selection for Speaker Adaptation
Y1 - 2001
A1 - Y. Zhao
A1 - Shaojun Wang
AB - This paper presents a new recursive Bayesian learning approach for transformation parameter estimation in speaker adaptation. Our goal is to incrementally transform or adapt a set of hidden Markov model (HMM) parameters for a new speaker and gain a large performance improvement from a small amount of adaptation data. By constructing a clustering tree of HMM Gaussian mixture components, the linear regression (LR) or affine transformation parameters for HMM Gaussian mixture components are dynamically searched. An online Bayesian learning technique is proposed for recursive maximum a posteriori (MAP) estimation of LR and affine transformation parameters. This technique has the advantages of being able to accommodate flexible forms of transformation functions as well as a priori probability density functions (pdfs). To balance model complexity against goodness of fit to the adaptation data, a dynamic programming algorithm is developed for selecting models using a Bayesian variant of the 'minimum description length' (MDL) principle. Speaker adaptation experiments with a 26-letter English alphabet vocabulary were conducted, and the results confirmed the effectiveness of the online learning framework.
ER -
TY - CONF
T1 - Recursive Estimation of Time-Varying Environments for Robust Speech Recognition
Y1 - 2001
A1 - K. Yen
A1 - Shaojun Wang
A1 - Y. Zhao
ER -
TY - CONF
T1 - On-Line Bayesian Speaker Adaptation By Using Tree-Structured Transformation and Robust Priors
Y1 - 2000
A1 - Shaojun Wang
A1 - Y. Zhao
ER -
TY - CONF
T1 - Optimal On-Line Bayesian Model Selection for Speaker Adaptation
Y1 - 2000
A1 - Shaojun Wang
A1 - Y. Zhao
ER -
TY - CONF
T1 - On-Line Bayesian Tree-Structured Transformation of Hidden Markov Models for Speaker Adaptation
Y1 - 1999
A1 - Shaojun Wang
A1 - Y. Zhao
ER -
TY - CONF
T1 - A Unified Framework for Recursive Maximum Likelihood Estimation of Hidden Markov Models
Y1 - 1999
A1 - Shaojun Wang
A1 - Y. Zhao
ER -
TY - CONF
T1 - On Convergence of Maximum Likelihood Estimation of Binary HMMs by EM Algorithm
Y1 - 1998
A1 - Shaojun Wang
A1 - M. Li
A1 - Y. Zhao
ER -
TY - JOUR
T1 - Probabilistic Production Costing of Hydro and Pumped Storage Units under Chronological Load Curve
Y1 - 1997
A1 - Shaojun Wang
A1 - Q. Xia
A1 - N. Xiang
ER -
TY - JOUR
T1 - Probabilistic Marginal Cost Curve and Its Applications
Y1 - 1995
A1 - S. Shahidehpour
A1 - Shaojun Wang
A1 - N. Xiang
ER -
TY - JOUR
T1 - Short-Term Generation Scheduling with Transmission and Environmental Constraints Using an Augmented Lagrangian Relaxation
Y1 - 1995
A1 - S. Shahidehpour
A1 - Shaojun Wang
A1 - S. Mokhtari
A1 - D. Kirschen
A1 - G. Irisarri
ER -
TY - JOUR
T1 - Probabilistic Production Costing under Chronological Load Curve
Y1 - 1994
A1 - N. Xiang
A1 - Q. Xia
A1 - Shaojun Wang
ER -