
List of accepted submissions

 
 
Integrating the implied regularity into implied volatility models: A study on an arbitrage-free model

The study "Integrating the Implied Regularity into Implied Volatility Models" investigates the relationship between implied volatility (IV) and the Hurst exponent (H), particularly in the context of option moneyness. The research introduces a novel IV model that integrates the concept of market memory, represented by H, to enhance the accuracy of volatility forecasting compared to traditional models such as SABR and its fractional extension, fSABR.

A key finding of the study is that when moneyness is equal to 1 (ATM), the Hurst exponent converges to 1/2, indicating that price movements follow a Brownian motion, which aligns with the Efficient Market Hypothesis (EMH). However, for options that are in-the-money (ITM) or out-of-the-money (OTM), H decreases, reflecting deviations from pure randomness and greater sensitivity of IV to market conditions. This observation suggests that market inefficiencies are more pronounced in extreme moneyness regions, making standard models less effective in capturing the dynamics of implied volatility.

To validate the model, the authors employ advanced optimization techniques, specifically Optuna, which leverages Bayesian optimization and the Tree-Structured Parzen Estimator (TPE) method. The model is calibrated against real-world market data, and its performance is compared with SABR and fSABR using error metrics such as mean squared error (MSE), mean absolute error (MAE), and curvature-based error measures. The empirical results show that the proposed model achieves lower errors and better fits the observed IV surface, especially in ITM and OTM regions where traditional models struggle.

Another significant contribution of the study is the verification that the proposed IV model satisfies the arbitrage-free conditions required for financial consistency. By ensuring that the model does not allow for riskless profit opportunities, the authors demonstrate its practical applicability in real-world trading and risk management.

Identifying Market Dynamics Through the Hurst Exponent

This study investigates financial market dynamics by analyzing historical crude oil price data through the Hurst exponent, a measure of long-term dependence in time series. The objective is to assess the persistence and volatility structure of the market and to evaluate the predictive capability of the Hurst exponent in identifying distinct volatility regimes.

First, the study estimates realized volatility and the time-varying Hurst exponent to characterize market behavior. A K-means clustering algorithm is then employed to classify observations into three volatility regimes: low, moderate, and high. The predictive power of the Hurst exponent is assessed by training and evaluating multiple machine learning models, including logistic regression, random forests, XGBoost, LightGBM, and support vector machines (SVMs).
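The pipeline of that paragraph (rolling realized volatility, a time-varying Hurst estimate, then K-means into three regimes) can be sketched as follows. The data are synthetic and the single-scale R/S formula is a crude estimator (proper estimation fits R/S across multiple scales); window sizes and step length are illustrative choices, not the study's.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic returns with slowly varying volatility, standing in for crude oil data.
returns = rng.normal(0, 1 + 0.5 * np.sin(np.linspace(0, 8, 2000)), 2000)

def hurst_rs(x):
    """Crude single-scale rescaled-range (R/S) estimate of the Hurst exponent."""
    n = len(x)
    z = np.cumsum(x - x.mean())
    r = z.max() - z.min()          # range of the cumulative deviations
    s = x.std(ddof=1)              # sample standard deviation
    return np.log(r / s) / np.log(n)

window, step = 200, 20
feats = []
for i in range(window, len(returns), step):
    w = returns[i - window:i]
    rv = np.sqrt(np.sum(w ** 2))   # realized volatility over the window
    feats.append([rv, hurst_rs(w)])
feats = np.asarray(feats)

# Classify windows into three volatility regimes (low / moderate / high).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
```

The regime labels produced here are what the downstream classifiers (logistic regression, LightGBM, etc.) would then be trained to predict.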

Empirical findings indicate that the Hurst exponent serves as a robust indicator of market turbulence. Among the models tested, LightGBM achieves the highest predictive performance, with an accuracy of 88%. Further optimization using Optuna's Bayesian hyperparameter tuning enhances the model’s performance, increasing accuracy to 91%. The model demonstrates high sensitivity in detecting high-volatility periods (recall = 1.00), although its precision in classifying these phases (0.80) suggests a degree of false-positive predictions.

To ensure the robustness of the results, a rolling-window validation strategy is implemented, preserving the temporal structure of the dataset. Additionally, isotonic regression is applied to refine the calibration of predicted probabilities, improving their reliability.
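The two robustness devices above can be sketched with scikit-learn: `TimeSeriesSplit` gives an order-preserving rolling validation, and `IsotonicRegression` recalibrates raw scores. The features, labels, and base classifier below are hypothetical placeholders, not the study's dataset or LightGBM model.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 3))                    # synthetic features (e.g. Hurst, RV lags)
y = (X[:, 0] + 0.5 * rng.normal(size=600) > 0).astype(int)  # synthetic regime label

accs, calibrated = [], None
# Each split trains only on observations that precede the test window,
# preserving the temporal structure of the series.
for tr, te in TimeSeriesSplit(n_splits=5).split(X):
    clf = LogisticRegression().fit(X[tr], y[tr])
    p_tr = clf.predict_proba(X[tr])[:, 1]
    p_te = clf.predict_proba(X[te])[:, 1]
    accs.append(((p_te > 0.5).astype(int) == y[te]).mean())
    # Fit the isotonic calibrator on training-fold scores only, then
    # map the test-fold scores to better-calibrated probabilities.
    iso = IsotonicRegression(out_of_bounds="clip").fit(p_tr, y[tr])
    calibrated = iso.predict(p_te)
```

Isotonic regression only imposes monotonicity on the score-to-probability map, which is why it is a common post-hoc calibration choice for tree ensembles.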

The findings of this study underscore the Hurst exponent’s effectiveness as a market efficiency and volatility indicator. By integrating statistical methods with machine learning techniques, this research provides a systematic framework for anticipating periods of financial instability, particularly in commodity markets such as crude oil. The proposed methodology offers valuable insights for risk management, portfolio allocation, and financial market forecasting.

The behavior of European financial markets under risk pressure: calculating the Value-at-Risk of a stock portfolio using Python

The behavior of financial markets is characterized by frequent changes due to external factors such as government policies, economic events and various regulations. These factors can cause shifts in the means, variances, serial correlation and skewness of asset returns.

Modeling the dependency and volatility of financial returns has been a key issue in financial analysis, as it helps to quantify risk more accurately. Analyzing historical market information provides a framework for understanding risk and determining potential financial losses. Value-at-Risk (VaR), recommended by the Basel II Accord, has become the most widely used risk measurement tool among analysts. VaR enables financial institutions to measure, for a given probability level, the largest expected loss of a portfolio over a particular period. One method for calculating VaR is the variance–covariance approach, which examines historical price movements and then uses probability theory to estimate the maximum loss within a specified confidence interval.

This paper analyzes the weekly returns of the financial indices of three countries, the United Kingdom (FTSE100), Germany (DAX30) and France (CAC40), over a period of 10 years, between September 2014 and September 2024. The first step of the analysis is to model the returns to account for various deviations from normality, such as skewness, excess kurtosis and autocorrelation. After modeling the data, VaR is calculated using the variance–covariance approach. The analysis is carried out in Python, whose powerful libraries and computational capabilities make it well suited to the task. Finally, the empirical results present the VaR forecasts at the 0.95 and 0.99 quantile levels. The paper also shows that delivering the analysis as a modular API makes it suitable for wider use in risk management and highly extensible, contributing to better-informed decisions.
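A minimal variance–covariance VaR computation in Python might look like the following. The return series are synthetic stand-ins for the three indices, the equal weights are an arbitrary choice, and the normality assumption is exactly the simplification the paper's modeling step is meant to correct for.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Synthetic weekly returns for three correlated indices
# (placeholders for FTSE100, DAX30, CAC40), ~10 years of weeks.
returns = rng.multivariate_normal(
    mean=[0.001, 0.0012, 0.0008],
    cov=[[4.0e-4, 2.0e-4, 2.0e-4],
         [2.0e-4, 5.0e-4, 2.5e-4],
         [2.0e-4, 2.5e-4, 4.5e-4]],
    size=520)

weights = np.array([1 / 3, 1 / 3, 1 / 3])        # equally weighted portfolio

# Variance-covariance VaR: the loss quantile of a normally distributed
# portfolio return with the sample mean and covariance.
mu_p = weights @ returns.mean(axis=0)
sigma_p = np.sqrt(weights @ np.cov(returns, rowvar=False) @ weights)
var_95 = -(mu_p + norm.ppf(0.05) * sigma_p)      # one-week 95% VaR (positive loss)
var_99 = -(mu_p + norm.ppf(0.01) * sigma_p)      # one-week 99% VaR
```

Because the 0.99 quantile sits further in the tail, `var_99` always exceeds `var_95` for the same portfolio.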

A Quantum Leap in Asset Pricing: Explaining Anomalous Returns

We extend asset pricing studies by comparing the ability of multifactor models to explain large numbers of anomalous portfolio returns. Surprisingly, standard Fama and MacBeth (1973) cross-sectional regression tests show that a lesser-known two-factor model, dubbed the ZCAPM by Kolari, Liu, and Huang (2021), substantially outperforms prominent multifactor models in explaining anomaly returns on an out-of-sample basis. In empirical tests, we utilize online databases of anomalies recently made available by researchers. Chen and Zimmerman (2022) provided an open-source database with 161 long/short anomalies in the U.S. stock market. Also, Jensen, Kelly, and Pedersen (2023) furnished an online database containing 153 long/short anomalies in 93 countries, including the U.S. Based on 133 anomalies in the former study and 153 anomalies in the latter study with return series available from July 3, 1972 to December 31, 2021, we investigate a combined dataset of 286 anomalies. We find that, with the exception of the ZCAPM, prominent multifactor models do not explain anomalous portfolio returns. In contrast, the ZCAPM does a much better job of explaining them. In standard Fama and MacBeth (1973) cross-sectional regression tests, factor loadings for the ZCAPM are more significant than those of well-known multifactor models. Also, goodness-of-fit, as estimated by R² values, is much higher for the ZCAPM than for other models. Further graphical tests compare the mispricing errors of different models with respect to anomalous portfolios. We find that the ZCAPM exhibits much lower mispricing errors than other models. We conclude that anomalous returns are, for the most part, anomalous with respect to prominent multifactor models but not the ZCAPM. By implication, our evidence supports the efficient-market hypothesis of Fama (1970, 2013) rather than the behavioral hypothesis. As such, stock returns are closely related to systematic market risks.
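The Fama-MacBeth two-pass procedure invoked throughout the abstract can be sketched on synthetic data: first, time-series regressions estimate each portfolio's factor loading; second, period-by-period cross-sectional regressions of returns on those loadings give a series of risk premia whose mean and t-statistic are the test output. The one-factor setup and all numbers below are illustrative, not the ZCAPM.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 240, 25                        # months, test portfolios
f = rng.normal(0.005, 0.04, T)        # one synthetic factor return series
beta_true = rng.uniform(0.5, 1.5, N)
r = beta_true[None, :] * f[:, None] + rng.normal(0, 0.02, (T, N))

# Pass 1: time-series regression of each portfolio on the factor.
X = np.column_stack([np.ones(T), f])
betas = np.linalg.lstsq(X, r, rcond=None)[0][1]      # slope per portfolio

# Pass 2: cross-sectional regression of returns on loadings, each period.
Z = np.column_stack([np.ones(N), betas])
lambdas = np.array([np.linalg.lstsq(Z, r[t], rcond=None)[0][1]
                    for t in range(T)])              # period-t risk premium

# Fama-MacBeth estimate: mean premium and its time-series t-statistic.
lam_hat = lambdas.mean()
t_stat = lam_hat / (lambdas.std(ddof=1) / np.sqrt(T))
```

In this toy setup the estimated premium should track the factor's average return, since the cross-sectional slope each period is essentially that period's factor realization.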

Developing a Multifaceted Central Bank Communication Dataset for Natural Language Processing-Driven Economic Analysis

Central bank communication is a pivotal component in supporting economic and monetary policy in many countries. The efficacy of central bank communication affects market perception and the credibility of monetary policy, necessitating analytical tools to assess it. This study develops a dataset called CentralBankCorpus, the first multi-faceted dataset in Indonesia designed to comprehensively analyze monetary policy and central bank communication. The study employed a document analysis method with a labeling technique. It began by collecting official Bank Indonesia communication documents through transcription and scraping. The collected data were then pre-processed and labeled with six linguistic tags. The result is the CentralBankCorpus, comprising nearly half a million linguistically tagged tokens spanning economic agent, topic, sentiment, transparency, key terms, and economic impact. The dataset has implications on several fronts. Academically, it can serve as a primary reference for NLP-focused research in economics, public policy, and organizational communication. Practically, it can assist Bank Indonesia in understanding and addressing public perceptions of its policies, thereby enhancing institutional accountability. This research ultimately supports Bank Indonesia's digital transformation through innovative application of NLP technology. Furthermore, it addresses a gap in the literature and contributes to Indonesia's economic development, while enhancing the nation's role in the use of modern technology for policy communication at a broader level.

Hybrid Machine Learning Models for Long-Term Stock Market Forecasting: Integrating Technical Indicators

Stock market forecasting is a critical area in financial research, yet the inherent volatility and non-linearity of financial markets pose significant challenges for traditional predictive models. This study proposes a hybrid deep learning model, integrating Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs) with technical indicators to enhance the predictive accuracy of stock price movements. The model is evaluated using the Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and R² score on the S&P 500 index over a 14-year period. The results indicate that the LSTM-CNN hybrid model achieves superior predictive performance compared to traditional models, including Support Vector Machines (SVMs), Random Forest (RF), and ARIMA, by effectively capturing both long-term trends and short-term fluctuations. While Random Forest demonstrated the highest raw accuracy, with the lowest RMSE (0.0859) and highest R² (0.5655), it lacks sequential learning capabilities. The LSTM-CNN model, with an RMSE of 0.1012, MAE of 0.0800, MAPE of 10.22%, and R² score of 0.4199, proved to be highly competitive and robust in financial time-series forecasting. This study highlights the effectiveness of hybrid deep learning architectures in financial forecasting and suggests further enhancements through macroeconomic indicators, sentiment analysis, and reinforcement learning for dynamic market adaptation. It also supports risk-aware decision-making frameworks in volatile financial markets.
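The four evaluation metrics used to compare the models have standard definitions, which a small helper makes concrete (the sample forecast below is made up for illustration):

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """RMSE, MAE, MAPE (in %), and R^2 for a point forecast."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100 * np.mean(np.abs(err / y_true))    # assumes no zero targets
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot                      # fraction of variance explained
    return {"rmse": rmse, "mae": mae, "mape": mape, "r2": r2}

m = forecast_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

Note that RMSE and MAE share the target's units while MAPE is scale-free, which is why studies typically report several of them side by side.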

Trading Emotions in Day Trading: Experimental Evidence on the Synergy Between Humans and Trading Robots

Traders do not trade with markets but with their perceptions and beliefs. There is a growing recognition of behavioral biases' impact on financial decision-making. However, in day trading, a significant gap remains in the literature regarding the psychological and operational implications of trading robots on human traders, which this study seeks to fill. The aim was to investigate the extent to which trading robots can mitigate behavioral biases and provide a more rational market view. An experiment was conducted on the Brazilian stock exchange (B3) with 130 human traders and trading robots in real time, aiming to understand the relationship between traders' rational intention to buy or sell and their emotional capacity to execute these actions. This research introduces the concept of Psychological Distance and explores how trading robots bridge the gap between rational decision-making and emotional execution, reducing psychological barriers that compromise financial performance. The results demonstrate that trading robots can reduce behavioral biases such as greed and impulsiveness, promoting more rational decision-making and improved financial outcomes. Robots are particularly effective at eliminating profit-related biases, reducing human emotional interference in trading. However, their effectiveness is limited when it comes to mitigating loss aversion, highlighting the complexity of emotional responses, especially in managing financial losses. These insights reveal the potential and limits of automated trading systems in improving traders' performance. By reducing emotional biases and fostering disciplined decision-making, trading robots help traders optimize their strategies. This study advances behavioral finance and trading automation, supporting future innovations in trading psychology and algorithmic assistance.

Leveraging Machine Learning Programming Algorithm for Predicting Credit Default among Nigerian Micro-borrowers

The high rate of credit default among micro-borrowers in developing economies highlights the limited predictive capacity of traditional risk assessment methods. This study therefore aims to predict credit or loan default among micro-borrowers in Abeokuta town, Ogun State, Nigeria, using the STATA-based Least Absolute Shrinkage and Selection Operator (LASSO) as a machine learning (ML) programming algorithm. A random sample of 384 microfinance customers was selected for the cross-sectional study, employing a simple structured questionnaire as the data instrument. LASSO estimation in STATA 12.1 statistical software at a 5% significance level shows that macroeconomic indicators (inflation and the state of the economy) and socio-political factors (such as the borrower's income, paid employment status and security) play significant roles in predicting loan default among micro-borrowers. The LASSO estimator yields larger regression coefficients than traditional logistic regression and performs better. The study therefore affirms that the ML programming algorithm provides greater predictive capability for credit default among micro-borrowers in the metropolitan city of Abeokuta, Nigeria. This finding implies that financial institutions in the study area that leverage ML algorithms can be proactive in risk management and optimize their resources through efficient allocation of funds among borrowers. To this end, the study suggests that financial institutions in Nigeria, especially microfinance banks, should explore the application of ML algorithms for advanced predictive and analytical capability over the complex patterns in prospective borrowers' information.
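The LASSO-versus-logistic comparison can be sketched in Python (the study itself works in STATA). An L1 penalty shrinks uninformative coefficients to exactly zero, which is the selection behavior the abstract relies on. The features, the true coefficient pattern, and the penalty strength `C` below are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n, p = 384, 10                        # sample size mirrors the study's 384 respondents
X = rng.normal(size=(n, p))           # synthetic borrower / macro features
true_w = np.array([1.2, -0.8, 0.9, 0, 0, 0, 0, 0, 0, 0])   # only 3 features matter
y = (X @ true_w + rng.logistic(size=n) > 0).astype(int)     # default indicator

# L1-penalized ("LASSO-style") logistic regression performs variable selection;
# the plain model keeps every coefficient nonzero.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
plain = LogisticRegression().fit(X, y)

n_kept = int(np.count_nonzero(lasso.coef_))
acc = (lasso.predict(X) == y).mean()
```

Sparsity is the practical payoff: a lender gets a short list of predictors (income, employment status, inflation, and so on) rather than a dense, hard-to-act-on coefficient vector.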

On the time-varying causal relationships that drive bitcoin returns

In this paper, we use a Bayesian time-varying parameter vector autoregressive (TVP-VAR) model to assess the impact of alternative drivers of bitcoin returns. We consider an extended set of alternative drivers such as bitcoin volatility, investor sentiment indices, proxies for bitcoin supply and demand, stock market returns and volatility indices, commodities returns, exchange rates and interest rates. We select the most important variables using a Bayesian variable selection method. To examine the evolution of the Granger causality relationship between the selected variables and bitcoin returns over time, we develop and employ a new approach based on the estimates of the TVP-VAR model and heteroscedastic consistent Granger causality hypothesis testing. Our findings indicate that investor sentiment and ethereum returns affect bitcoin returns over the entire sampling period. Trading volume emerges as an important determinant of bitcoin returns when bitcoin prices remain relatively steady. In addition to the Granger causality, we perform impulse response function and forecast error variance decomposition analysis. The results from the structural analysis provide further evidence of the time-varying nature of bitcoin’s dynamics. In particular, we find that the effect of a structural shock (in terms of magnitude) on bitcoin returns depends on the time that the shock occurs.

Who is leading in Communication Tone? Wavelet Analysis for the Fed and the ECB

In this paper, we examine the interdependence of the communication tones of the Fed and the ECB, arguably the two most influential central banks (CBs). The interdependence of monetary policies is expected, as price fluctuations may spill over between countries or country blocks via trade links or other events such as global economic crises, pandemics, etc. We examine the relationship between the two CBs in the time and frequency domains, which we believe is crucial to scrutinizing dynamics over a long time horizon that is susceptible to transitions; wavelets allow short-, medium-, and long-term linkages to be revealed. Furthermore, we measure the sentiments (tones) of the two CBs using two complementary methodologies for the analysis of economic and financial texts, i.e., a lexicon-based approach and FinBERT as a transformer-based approach. Our empirical findings suggest that the relationship between the two CBs is dynamic in the time and frequency dimensions, and there is no static leading role assigned to either of the CBs. Moreover, the lexicon-based and transformer-based algorithms yield relatively similar results in the medium run, which may suggest that the alternative methodologies are complementary. To the best of our knowledge, our study is the first to scrutinize the relationship between the communication tones of two CBs taking into account the dynamics in the time and frequency dimensions and using both lexicon-based and transformer-based methods.
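A lexicon-based tone score, of the kind the paper contrasts with FinBERT, reduces to counting words against polarity lists. The tiny hawkish/dovish word lists below are made-up placeholders; real studies use full financial lexicons with thousands of entries.

```python
# Toy hawkish/dovish word lists; illustrative only, not a real financial lexicon.
HAWKISH = {"tighten", "inflation", "raise", "restrictive", "hike"}
DOVISH = {"accommodative", "ease", "support", "cut", "stimulus"}

def tone(text):
    """Net tone in [-1, 1]: (hawkish - dovish) / (hawkish + dovish) word counts.

    Returns 0.0 when no lexicon word occurs in the text.
    """
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    h = sum(w in HAWKISH for w in words)
    d = sum(w in DOVISH for w in words)
    return 0.0 if h + d == 0 else (h - d) / (h + d)

s = tone("The committee decided to raise rates to tighten policy amid inflation.")
```

Scoring each statement in a CB's communication archive this way yields the tone time series on which the wavelet coherence analysis operates.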
