This is the schedule for the 2nd day (9th January 2026, Friday) of the workshop. To view the schedule for the previous day, please visit the Schedule for 8th January 2026 page.
TIP
The content of this page is subject to change. Please check back regularly for updates. The last update was on 7th January 2026.
Please click the links below to view the detailed schedule for each session. The schedule is in Hong Kong Time (UTC+8).
Please refer to the Speakers page for detailed information about the keynote speaker. The moderator of this session is Xinghua Zheng from the Hong Kong University of Science and Technology.
Key 2
Factor Informed Double Deep Learning For Average Treatment Effect Estimation
Jianqing Fan (Princeton University)
9:00 - 10:00 AM, 9 Jan., LT-15
Abstract We investigate the problem of estimating the average treatment effect (ATE) in a very general setting where the covariates can be high-dimensional, highly correlated, and can have sparse nonlinear effects on the propensity and outcome models. We present a Double Deep Learning strategy for estimation, which combines recently developed factor-augmented deep-learning-based estimators, FAST-NN, for both the response functions and the propensity scores. By using FAST-NN, our method selects the variables that contribute to the propensity and outcome models in a completely nonparametric and algorithmic manner and adaptively learns low-dimensional function structures through neural networks. Our proposed novel estimator, FIDDLE (Factor Informed Double Deep Learning Estimator), estimates the ATE within the framework of augmented inverse propensity weighting (AIPW), using the FAST-NN-based response and propensity estimates. FIDDLE consistently estimates the ATE even under model misspecification and is flexible enough to also accommodate low-dimensional covariates. Our method achieves semiparametric efficiency under a very flexible family of propensity and outcome models. We present extensive numerical studies on synthetic and real datasets that support our theoretical guarantees and establish the advantages of our method over traditional alternatives, especially when the data dimension is large. (Joint work with Soham Jana, Sanjeev Kulkarni, and Qishuo Yin)
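For readers unfamiliar with the AIPW framework the abstract builds on, the following is a minimal sketch of the generic AIPW estimator on simulated data. It is not FIDDLE: the FAST-NN nuisance estimators are replaced by oracle plug-ins, and all variable names and the toy data-generating process are illustrative assumptions.

```python
import numpy as np

def aipw_ate(y, a, mu1, mu0, e):
    """Generic AIPW estimate of the average treatment effect.

    y        : observed outcomes
    a        : binary treatment indicators (0/1)
    mu1, mu0 : fitted outcome regressions E[Y | A=1, X] and E[Y | A=0, X]
    e        : fitted propensity scores P(A=1 | X)
    """
    return np.mean(
        mu1 - mu0
        + a * (y - mu1) / e
        - (1 - a) * (y - mu0) / (1 - e)
    )

# Toy check: with oracle nuisance functions, AIPW recovers the
# true ATE of 2.0 on simulated data.
rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(size=n)
e_true = 1.0 / (1.0 + np.exp(-x))       # true propensity score
a = rng.binomial(1, e_true)
y = 2.0 * a + x + rng.normal(size=n)    # true ATE = 2.0
est = aipw_ate(y, a, mu1=2.0 + x, mu0=x, e=e_true)
```

The double-robustness property referenced in the abstract means the estimator stays consistent if either the outcome regressions or the propensity score is correctly specified.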
The chair of this session is Shakeel Gavioli-Akilagun from the City University of Hong Kong.
S 10
Hypergraph Embeddings: A Novel Approach with Increasing Dimensions
Binyan Jiang (Hong Kong Polytechnic University)
10:30 - 11:00 AM, 9 Jan., LT-15
Abstract Hypergraphs generalize graphs by allowing each edge, known as a hyperedge, to connect multiple vertices. Despite their significant advantages, hypergraph embeddings remain underexplored compared with pairwise graph embeddings, owing to the inherent complexity of hypergraph topologies. Existing approaches often rely on fixed-dimensional embeddings, in which the relative closeness among nodes is fixed regardless of hyperedge order. This fixed-dimensional setting encourages heredity among hyperedges of different orders and fails to offer a flexible projection that captures the complex relationships among nodes. In this project, we propose a novel increasing-dimensional embedding approach that jointly accounts for sparsity and node heterogeneity, including both degree heterogeneity and heterogeneity in the latent dependencies among hyperedges of different orders. The proposed framework offers a more flexible way to capture diverse features of hypergraphs and could provide new insights in a range of real applications.
S 11
Prediction-powered Linear Regression: a Balance between Interpretation and Prediction
Xinyu Zhang (Chinese Academy of Sciences)
11:00 - 11:30 AM, 9 Jan., LT-15
Abstract Machine learning can rapidly generate large numbers of predicted labels using complex prediction techniques, making it an efficient and low-cost labeling solution. However, most machine learning algorithms lack interpretability. This study adopts linear regression as the baseline model and proposes a prediction-powered approach that leverages unlabeled data to enhance prediction performance while preserving model interpretability. In the proposed approach, we incorporate model averaging to address the uncertainty arising from the choice of model, power tuning parameter, and machine learning algorithm. Simulations and applications demonstrate its promising performance.
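To illustrate the general prediction-powered idea of combining a small labeled sample with machine-learning predictions on a large unlabeled sample, here is a minimal sketch for the simplest target, a population mean. This is not the talk's regression or model-averaging method; the predictor `predict` and the data-generating process are assumptions made up for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n_lab, n_unlab = 500, 50_000
x_lab = rng.normal(size=n_lab)
y_lab = 2.0 * x_lab + rng.normal(size=n_lab)   # labeled data, E[Y] = 0
x_unlab = rng.normal(size=n_unlab)             # unlabeled covariates

def predict(x):
    """An imperfect, black-box ML predictor (hypothetical)."""
    return 1.8 * x

# Prediction-powered estimate of E[Y]: the mean prediction over the
# large unlabeled sample, plus a bias correction ("rectifier")
# estimated from the labeled sample.
theta_pp = predict(x_unlab).mean() + (y_lab - predict(x_lab)).mean()
theta_classical = y_lab.mean()                 # labeled data only
```

The rectifier term keeps the estimator unbiased even when the predictor is systematically off, while the unlabeled sample drives down variance.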
S 12
Network Analysis of Business Cycle Synchronisation
Jia Chen (University of Macau)
11:30 - 12:00 Noon, 9 Jan., LT-15
Abstract To investigate the fundamental relationship between business cycle synchronisation (BCS) and trade/finance intensities, we develop a simultaneous equation panel data model that accommodates all the key elements: simultaneity, spatial spillovers, global shocks, and parameter heterogeneity. We propose a consistent CCEX-2SLS estimator and conduct a spatial network analysis to investigate the direct and indirect impacts of trade/finance intensities on BCS across country pairs and selected clusters. Applying the proposed approach to a dataset of the 136 pairs formed by 17 OECD countries over 1995Q1-2019Q4, we find that: (i) the individual CCEX-2SLS estimation results demonstrate the importance of explicitly accounting for parameter heterogeneity; (ii) in almost 90% of the sample, the direct and indirect effects of trade/finance intensities on BCS display opposite signs; and (iii) the total effect of trade intensity on BCS is surprisingly negative, as are the spillovers of trade and financial intensities on BCS. This implies that the optimal currency area (OCA) criteria have not yet been fulfilled in the EU, and suggests that policymakers should coordinate across borders and mitigate adverse economic fluctuations so that greater trade/financial intensities translate into improved BCS.
The chair of this session is Yingying Li from the Hong Kong University of Science and Technology.
E 10
Beyond the Mean: Limit Theory and Tests for Infinite-Mean Autoregressive Conditional Durations
Giuseppe Cavaliere (University of Bologna)
10:30 - 11:00 AM, 9 Jan., LT-16
Abstract Integrated ACD models are natural counterparts to integrated GARCH for financial returns, but their asymptotic theory remains incomplete. Key difficulties are that integrated ACD implies durations with infinite mean and that standard asymptotics fail because the number of durations in a fixed time span is random. We develop a unified asymptotic theory for the (quasi-) maximum likelihood estimator that covers both standard and integrated ACD models, and use it to build a new hypothesis testing framework to determine whether durations have finite or infinite expectation. Applying this to high-frequency cryptocurrency ETF trades, we find evidence of infinite-mean durations for all five cryptocurrencies studied.
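For orientation, the following sketch simulates a standard ACD(1,1) model, the duration analogue of GARCH(1,1) that the abstract refers to. The parameter values are illustrative assumptions; the integrated case studied in the talk corresponds to alpha + beta = 1, under which the unconditional duration mean is infinite.

```python
import numpy as np

# ACD(1,1): durations x_t = psi_t * eps_t with i.i.d. unit-mean
# innovations eps_t and conditional-mean recursion
#   psi_t = omega + alpha * x_{t-1} + beta * psi_{t-1}.
# Here alpha + beta < 1, so durations have finite unconditional
# mean omega / (1 - alpha - beta); setting alpha + beta = 1 gives
# the integrated ACD of the talk, with infinite-mean durations.
rng = np.random.default_rng(1)
omega, alpha, beta = 0.1, 0.2, 0.7
n = 20_000
x = np.empty(n)
psi = omega / (1 - alpha - beta)    # start at the unconditional mean
for t in range(n):
    x[t] = psi * rng.exponential()  # exponential eps_t has unit mean
    psi = omega + alpha * x[t] + beta * psi
```

In the stationary case simulated here the sample mean of the durations settles near omega / (1 - alpha - beta) = 1.0; in the integrated case it diverges, which is what breaks the standard asymptotics the talk addresses.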
E 11
Efficient Portfolio Estimation in Large Risky Asset Universes
Leheng Chen (Hong Kong University of Science and Technology)
11:00 - 11:30 AM, 9 Jan., LT-16
Abstract This paper introduces CORE (COnstrained sparse Regression for Efficient portfolios), a novel method for constructing efficient portfolios from an investment universe composed exclusively of risky assets. We develop two versions of CORE: one for investment universes that exclude long-short portfolios, and one for universes that include them. We establish the asymptotic mean-variance efficiency of the CORE portfolio as the number of assets and the sample size approach infinity proportionally. In extensive simulations and empirical studies on S&P 500 Index constituents, the CORE portfolio achieves the target risk levels, delivers superior Sharpe ratios, and outperforms various benchmarks both before and after accounting for transaction costs.
E 12
Causal Reinforcement Learning: an Instrumental Variable Approach
Ye Luo (University of Hong Kong)
11:30 - 12:00 Noon, 9 Jan., LT-16
Abstract In the standard data analysis framework, data is first collected (once and for all), and then data analysis is carried out. With the advancement of digital technology, decision-makers instead constantly analyze past data and generate new data through the decisions they make. In this paper, we model this as a Markov decision process and show that the dynamic interaction between data generation and data analysis leads to a new type of bias, reinforcement bias, that exacerbates the endogeneity problem in standard data analysis. We propose a class of instrumental variable (IV)-based reinforcement learning (RL) algorithms to correct for the bias and establish their asymptotic properties by embedding them in a two-timescale stochastic approximation framework. A key contribution of the paper is the development of new techniques that allow the algorithms to be analyzed in general settings where the noise features time dependency. We use these techniques to derive sharper results on finite-time trajectory stability bounds: with a polynomial rate, the entire future trajectory of the iterates from the algorithm falls within a ball that is centered at the true parameter and shrinks at a (different) polynomial rate. We also use the techniques to provide inference formulas, which are rarely available for RL algorithms. These formulas highlight how the strength of the IV and the degree of the noise's time dependency affect the inference.
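The endogeneity problem that IVs correct can be seen in a minimal static-data sketch: an unobserved confounder biases least squares, while a valid instrument recovers the true coefficient. This illustrates only the basic IV mechanism, not the paper's RL algorithms or two-timescale analysis; the data-generating process below is an assumption made up for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
z = rng.normal(size=n)                  # instrument: affects x, not y directly
u = rng.normal(size=n)                  # unobserved confounder
x = z + u + rng.normal(size=n)          # endogenous regressor (correlated with u)
y = 1.5 * x + u + rng.normal(size=n)    # true coefficient = 1.5

beta_ols = (x @ y) / (x @ x)            # OLS: biased upward by Cov(x, u)
beta_iv = (z @ y) / (z @ x)             # simple IV estimator: consistent
```

Here OLS converges to roughly 1.5 + Cov(x, u)/Var(x) ≈ 1.83, while the IV estimate concentrates around the true 1.5; the paper's contribution is to perform this kind of correction online, inside an RL loop where past decisions feed back into the data.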
WARNING
The detailed schedule for Session 4 Econometrics is currently being finalized and will be updated here soon. Please check back later.