Camille Simardone
PhD Student, PhD Job Candidate
Department of Economics
Biography
Selected Scholarships and Awards:
- Joseph-Armand Bombardier Canada Graduate Scholarships Doctoral Scholarship, Social Sciences and Humanities Research Council of Canada, 2018-present
- Graduate Scholarship, McMaster University, 2016
- Faculty of Arts and Sciences Admission Award, University of Toronto, 2014
- Geoffrey Patten Baker Memorial Scholarship, University of Toronto, 2014
- E.A. Robinson Medal - Social Sciences, University of Toronto Mississauga, 2014
- Best Student Award - B.Com., University of Toronto Mississauga, 2014
- Award for Outstanding Performance in Economics, University of Toronto Mississauga, 2014
- Award for Outstanding Performance in Management, University of Toronto Mississauga, 2014
- University of Toronto Women’s Association Scholarship, 2012
Employment:
- 2015-2016: Education Consultant, Haiti Education Team, World Bank Group
Education
- PhD Economics, McMaster University, expected 2021
- MA Economics, University of Toronto, 2015
- BCom Commerce, Finance and Economics, University of Toronto Mississauga, 2014
Teaching
Teaching Experience:
- 2020 (Spring/Summer) - Teaching Assistant, Economics of Labour Markets
- 2019 (Spring/Summer) - Teaching Assistant, Economics of Aging
- 2018 (Spring/Summer) - Instructor, Introduction to Microeconomics
- 2016 (Fall) - Teaching Assistant, Intermediate Microeconomics
- 2015 (Fall) - Teaching Assistant, Family Economics
- 2015 (Winter) - Teaching Assistant, Microeconomic Theory, Macroeconomic Theory, Quantitative Methods in Economics, and Labour Economics
- 2014 (Fall) - Teaching Assistant, Microeconomic Theory and Quantitative Methods in Economics
Research
Working Papers:
Handling Model Uncertainty: Model Averaging and Machine Learning Methods for Empirical Problems in Economics
Abstract: Uncertainty regarding parameter estimates is addressed in nearly every empirical economic paper, yet uncertainty regarding the model specification is too often ignored. While model selection is an improvement over the common practice of asserting a model in an ad hoc manner, this approach typically entertains only a small number of models, with no guarantee that the data generating process is well-approximated by a model lying in this limited set. Model averaging is a leading approach for handling model uncertainty, yet it has not been widely adopted by empirical economists despite its considerable advantages over conventional methods, such as better predictive ability, more robust results, broad applicability, and fewer assumptions. I run Monte Carlo experiments and evaluate the relative performance of model averaging, assertion, selection, and three machine learning algorithms in terms of their mean squared error (MSE). Bagging, boosting, and the post-lasso are considered because, like other model selection procedures, they acknowledge model uncertainty. My results show that model averaging consistently outperforms model selection-based methods in simulations where the functional form of the data generating process is known. Finally, using data from the National Longitudinal Survey, I apply each method to estimate the returns to education and demonstrate how easily model averaging can be adopted by following a simple guide for constructing the set of candidate models. This paper thus highlights the advantages of using model averaging for empirical problems, with a novel emphasis on the set of candidate models that are averaged.
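As a rough illustration of the comparison described in the abstract, the sketch below contrasts model selection with model averaging using smoothed-AIC weights on simulated data. It is a minimal sketch under assumed choices: the data generating process, the polynomial candidate models, and the AIC-based weighting scheme are illustrative assumptions, not the designs or estimators used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from a nonlinear DGP (illustrative assumption, not the paper's design)
n = 200
x = rng.uniform(-2, 2, n)
mu = np.sin(1.5 * x) + 0.5 * x          # true conditional mean
y = mu + rng.normal(0, 0.5, n)

# Candidate models: polynomials in x of increasing order
orders = range(1, 6)

def fit_poly(x, y, p):
    X = np.vander(x, p + 1, increasing=True)      # design matrix [1, x, ..., x^p]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit
    yhat = X @ beta
    rss = np.sum((y - yhat) ** 2)
    aic = n * np.log(rss / n) + 2 * (p + 1)       # Gaussian AIC up to a constant
    return yhat, aic

fits = [fit_poly(x, y, p) for p in orders]
yhats = np.column_stack([f[0] for f in fits])
aics = np.array([f[1] for f in fits])

# Model selection: keep only the AIC-best candidate model
y_select = yhats[:, np.argmin(aics)]

# Model averaging: smoothed-AIC weights over all candidate models
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()
y_average = yhats @ w

# Compare fits against the true conditional mean
print("selection MSE:", np.mean((y_select - mu) ** 2))
print("averaging MSE:", np.mean((y_average - mu) ** 2))
print("weights:", np.round(w, 3))
```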
In Search of the Optimal Model Set: Methods for Generating Candidate Models for Model Averaging
Abstract: Model selection and model averaging are useful when there are a number of competing models supported by economic theory, yet it is unclear which among them is the ‘best’ model. Selecting the set of candidate models to be used in either model selection or model averaging is an important step, because the resulting estimator inherits properties from the candidate models; however, it is a step that is often overlooked. Current practice suggests writing down a handful of parametric models with a common parameter of interest, estimating each model individually, and then either using a model selection criterion to select one model from the set of candidate models or computing model average weights in order to construct the model average estimator (a weighted average of the estimates from each individual model). However, these approaches often rely on subjective decisions on the part of the researcher, such as the functional form specification of each model. This paper investigates different approaches for constructing a set of candidate models with desirable properties to be used in model averaging. Ideally, the candidate model set should balance model complexity, breadth (covering a wide range of potential data generating processes), and computational efficiency. Three promising approaches are discussed: model screening, recursive partitioning-based algorithms, and methods that average over nonparametric models. Additionally, heuristics that empirical researchers can follow to apply the recommended approach for constructing the candidate model set in their own work are described in detail.
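As a rough illustration of the candidate-set construction and screening step discussed in the abstract, the sketch below generates a broad candidate set (all subsets of a small pool of regressors) and screens it by cross-validated MSE before any averaging would take place. The data generating process, the all-subsets generator, and the screening rule are illustrative assumptions, not the approaches developed in the paper.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)

# Simulated data with several potential regressors (illustrative assumption)
n, k = 300, 5
X = rng.normal(size=(n, k))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 1, n)

# Step 1: generate a broad candidate set -- every nonempty subset of the regressors
candidates = [list(s) for r in range(1, k + 1) for s in combinations(range(k), r)]

def cv_mse(cols, folds=5):
    """Cross-validated MSE for the OLS model that uses the given columns."""
    idx = np.arange(n)
    np.random.default_rng(0).shuffle(idx)
    errs = []
    for fold in np.array_split(idx, folds):
        train = np.setdiff1d(idx, fold)
        Xtr = np.column_stack([np.ones(len(train)), X[train][:, cols]])
        Xte = np.column_stack([np.ones(len(fold)), X[fold][:, cols]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        errs.append(np.mean((y[fold] - Xte @ beta) ** 2))
    return np.mean(errs)

# Step 2: screen the candidate set, keeping the models with the lowest CV error;
# the retained set would then be passed on to model averaging
scores = np.array([cv_mse(c) for c in candidates])
for i in np.argsort(scores)[:8]:
    print("regressors:", candidates[i], "CV MSE:", round(scores[i], 3))
```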
Research Experience:
- 2017-present - Research Assistant, Professor Jeffrey S. Racine, McMaster University
- 2017 - Research Assistant, Public Economic Data Analysis Laboratory (PEDAL), McMaster University
- 2017 - Research Assistant, Statistics Canada Research Data Centre, McMaster University
- 2015 - Research Assistant, Professor Philip Oreopoulos, University of Toronto
- 2015 - Research Assistant, Professor Loren Brandt, University of Toronto
- 2015 - Research Assistant, Professor Nicholas Li, University of Toronto