List of accepted submissions

 
 
  • Open access
Investigating Corporate Governance Impact on Financial Risk Management: Insights from the Albanian Banking Industry

Corporate governance is essential for mitigating financial risk and enhancing the resilience of financial services firms. The 2008 financial crisis underlined the critical role of corporate governance in ensuring financial system stability. Following the crisis, policymakers and international standard-setting organizations called for stricter governance structures to keep banks from taking on excessive risk and to prevent another systemic collapse. This study examines the influence of corporate governance and financial factors on credit risk within the Albanian banking industry. Using data from 12 commercial banks over the period 2012–2022, it applies regression analysis in EViews to examine how board independence, ownership concentration, executive compensation, bank size, and capital adequacy affect credit risk. The results indicate that higher board independence, higher executive compensation, and larger bank size are associated with lower credit risk, while ownership concentration is positively related to credit risk. The findings suggest that improving corporate governance practices, specifically increasing board independence and aligning executive compensation with performance, can lower credit risk in the banking sector. Furthermore, authorities should monitor the impact of ownership concentration, as powerful shareholders may push banks toward riskier practices. This study provides valuable insights for policymakers and banking regulators seeking to strengthen the stability of the Albanian financial system.
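
As a rough illustration of the regression described above, the following Python sketch mirrors the kind of EViews specification the study reports, using statsmodels instead; the file name, variable names (npl_ratio, board_indep, and so on), and the clustered-errors choice are illustrative assumptions, not the authors' actual setup.

```python
# Sketch: pooled OLS of credit risk on governance and financial factors,
# approximating the specification described above (assumed variable names).
import pandas as pd
import statsmodels.formula.api as smf

# Assumed panel: one row per bank-year, 12 banks over 2012-2022.
df = pd.read_csv("albanian_banks_panel.csv")  # hypothetical file

model = smf.ols(
    "npl_ratio ~ board_indep + ownership_conc + exec_comp "
    "+ bank_size + capital_adequacy",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["bank_id"]})

print(model.summary())
# Expected signs, per the reported findings: negative on board_indep,
# exec_comp, and bank_size; positive on ownership_conc.
```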

  • Open access
Decoding ESG's Impact on Conditional Beta: Insights from Eurostoxx 600

This study explores the relationship between environmental, social, and governance (ESG) factors and systematic risk, measured through conditional beta, in a sample of Eurostoxx 600 firms from 2015 to 2021. Using machine learning models, including Random Forest and XGBoost, we examine how ESG dimensions interact with financial performance across six major super-sectors. Our findings reveal significant sectoral heterogeneity. Environmental investments increase short-term risk in the Industrial and Consumer Discretionary sectors due to compliance costs and market volatility, while governance plays a key role in Energy and Utilities, where regulatory requirements can heighten systematic risk. In contrast, ESG factors have a lower impact in the Financial and Real Estate sectors, where existing regulations may mitigate additional ESG-driven risk, than in the other super-sectors, though governance remains the most influential pillar. In the Healthcare sector, environmental initiatives appear to reduce risk by strengthening reputational capital and investor confidence, while social and governance factors increase short-term volatility. These insights suggest that ESG does not function as a universal risk mitigator but reshapes risk exposure depending on industry dynamics and regulatory constraints. Our results provide guidance for investors and policymakers integrating ESG considerations into financial risk models and highlight the need for sector-specific ESG strategies to enhance the accuracy of risk assessments.
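
A minimal sketch of the per-sector analysis described above, assuming ESG pillar scores and a precomputed conditional beta as columns; the file and column names are hypothetical, and the paper's exact feature set and tuning are not reproduced.

```python
# Sketch: per-sector Random Forest linking ESG pillar scores to conditional
# beta, with feature importances to compare pillars (assumed column names).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("eurostoxx600_esg_panel.csv")  # hypothetical file
pillars = ["env_score", "social_score", "gov_score"]

for sector, grp in df.groupby("super_sector"):
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(grp[pillars], grp["conditional_beta"])
    # Rank pillars by importance within each super-sector.
    ranking = sorted(zip(pillars, rf.feature_importances_),
                     key=lambda t: -t[1])
    print(sector, ranking)
```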

  • Open access
“Learning from Your Neighbours”: Prudential Provisions of the EU AI Act for the UK Insurance Supervisory Regime

This paper focuses on the prudential regulation and supervision of UK re-insurance undertakings in relation to Artificial Intelligence (AI) considerations. Specifically, it presents a critical analysis of the prudential provisions of the EU AI Act that could be adjusted and adopted in the UK regulatory and supervisory regime, in line with the Prudential Regulation Authority (PRA)’s approach to insurance supervision. Building on the gaps identified in the supervisory approach to AI applications within the insurance value chain, it proposes developments based on the EU AI Act. The purpose of this paper is to present a critique of the lessons the EU AI Act offers UK financial regulators regarding the prudential supervision of re-insurers. Beyond the EU AI Act, principles from the IAIS are also discussed to complement the recommendations for UK regulators. The paper's contribution lies in advancing the UK's approach to (a) regulating and (b) supervising AI applications within insurance. In relation to the prudential supervision of AI applications and uses within the insurance value chain, principles-based and rules-based approaches are examined and cross-compared. Regulating and supervising AI applications within the UK insurance industry is of high importance, given AI's uses and the inherent purpose of insurance. In particular, as the insurance market grows in capacity and AI applications expand, bridging the insurance protection gap and making insurance more affordable through more accurate risk assessment and improved underwriting, both core prudential practices, become increasingly important. Overall, this research adds to the growing literature on the regulatory implications of AI, using the UK insurance industry as a case study, by commenting on the EU versus UK regulatory regime and supervisory approach from a prudential lens.

  • Open access
Machine Learning for Non-Performing Loan Prediction: Enhancing Credit Risk Management

Non-performing loans (NPLs) hurt financial institutions by raising risk, decreasing cash flow, and eroding capital, making it critical for lenders to evaluate credit risk accurately. With the increasing complexity of credit risk assessment, machine learning algorithms have become essential for the early detection and mitigation of NPLs, allowing financial institutions to make better decisions to lower credit risk by accurately forecasting NPLs. Compared with traditional statistical models, machine learning algorithms are better at predicting default probabilities and identifying patterns. To predict NPLs, this study examines the efficacy of seven machine learning algorithms: Random Forest, Decision Tree, Lasso Regression, Support Vector Machine (SVM), Bidirectional Long Short-Term Memory (BiLSTM), Light Gradient Boosting Machine (LightGBM), and Extreme Gradient Boosting (XGBoost). The analysis is conducted using a dataset from the DSE-listed commercial banks of Bangladesh covering the period from 2013 to 2023. Various performance metrics, such as the mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), and mean absolute percentage error (MAPE), are used to assess the accuracy of the trained models. The empirical results show that while BiLSTM shows promise in capturing temporal relationships in loan performance, ensemble learning models, specifically XGBoost and LightGBM, display stronger predictive performance than conventional tree-based classifiers. By providing insights for researchers, banking institutions, and legislators on how to improve risk assessment frameworks, this comparative analysis adds to the growing body of work on machine learning applications in financial risk management.
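
The model comparison can be sketched as follows, using standard scikit-learn metric calls; the dataset columns are placeholders, and only two of the seven models are shown.

```python
# Sketch: fit two of the seven models and score them with the error metrics
# named above; dataset columns are assumed, not the authors' exact features.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             mean_absolute_percentage_error)
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

df = pd.read_csv("dse_banks_2013_2023.csv")  # hypothetical file
X, y = df.drop(columns=["npl_ratio"]), df["npl_ratio"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("XGBoost", XGBRegressor()), ("LightGBM", LGBMRegressor())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.4f} "
          f"MSE={mse:.4f} RMSE={np.sqrt(mse):.4f} "
          f"MAPE={mean_absolute_percentage_error(y_te, pred):.4f}")
```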

  • Open access
Predicting Market Reactions to News: An LLM-Based Approach Using Spanish Business Articles

In financial markets, news significantly impacts stock prices. Despite the widely postulated "Efficient Market Hypothesis," empirical evidence consistently reveals market inefficiencies, particularly when processing complex textual information. Previous research addressing these inefficiencies has predominantly employed dictionary-based methods, sentiment analysis, topic modeling, and more recently, vector-based models such as BERT. However, these approaches often lack a comprehensive understanding of textual nuances and typically neglect firm-specific economic shocks, relying excessively on headlines rather than full-text analysis. This paper addresses these limitations by leveraging Large Language Models (LLMs) to provide a comprehensive, firm-specific analysis of complete news articles. Using a dataset of Spanish business news from Dow Jones Newswires during a period of heightened uncertainty (June 2020 to September 2021), we apply LLMs guided by a structured news-parsing schema. This schema systematically identifies firms affected by news articles and classifies the implied economic shocks by type, magnitude, and direction. Our findings demonstrate that traditional vector embedding-based clustering methods (e.g., KMeans) yield unstable article distributions over sequential data splits, resulting in short-lived trading signals and negligible out-of-sample profitability. In contrast, the LLM-based methodology produces stable and economically meaningful clusters, generating robust and persistent trading signals. The resulting trading strategy effectively identifies winners and losers, consistently anticipating market trends by comprehending the economic implications of firm-specific shocks. Moreover, the profitability of this approach remains robust across various hyperparameter choices, including holding period lengths and the number of selected clusters. Overall, our results highlight the superiority of LLM-based analysis in capturing nuanced, economically relevant information from financial narratives, offering a promising avenue for predicting market reactions to firm-specific news during volatile periods.
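
The paper does not disclose its prompt or LLM client, but a structured news-parsing schema of the kind it describes might look like this hypothetical sketch; the model name, client library, and JSON field names are all assumptions.

```python
# Sketch: structured extraction of firm-specific shocks from an article.
# Schema fields follow the abstract (firm, shock type, magnitude, direction);
# the OpenAI client and model choice are illustrative placeholders.
import json
from openai import OpenAI

SCHEMA_PROMPT = """Read the following Spanish business news article.
Return JSON: {"firms": [{"name": str,
                         "shock_type": str,      # e.g. demand, supply, policy
                         "magnitude": "minor|moderate|major",
                         "direction": "positive|negative"}]}"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def parse_article(text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "system", "content": SCHEMA_PROMPT},
                  {"role": "user", "content": text}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)
```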

  • Open access
Pair Trading a Sparse Synthetic Control

Financial markets frequently exhibit transient price divergences between economically linked assets, yet traditional pair trading strategies struggle to adapt to structural breaks and complex dependencies, limiting their robustness in dynamic regimes. This paper addresses these challenges by developing a novel framework that integrates sparse synthetic control with copula-based dependence modeling to enhance adaptability and risk management. Economically, our approach responds to the need for strategies that systematically identify latent linkages while mitigating overfitting in high-dimensional asset pools. The sparse synthetic control methodology constructs a parsimonious synthetic asset via an ℓ1-regularized least squares optimization, automatically selecting a sparse subset of influential assets from a broad donor pool while maintaining interpretability and computational efficiency. Empirical application to S&P 500 constituents demonstrates that relatively few donor assets (27 in our case) suffice to create effective synthetic controls. By embedding this within a copula-based dependence framework, we capture non-linear and tail dependencies between target and synthetic assets. Our analysis reveals that elliptical copulas, particularly the Student's t specification, provide the best fit for modeling return dependencies, highlighting the importance of accommodating tail dependence in pair trading strategies. Trading signals, grounded in the relative mispricing between these assets, employ a cumulative index that resets after position closures to isolate episodic opportunities, with disciplined entry rules requiring concurrent misalignment signals to filter noise. The empirical results demonstrate the superior performance of our integrated approach across diverse market conditions. The best-performing copula specification, N14, achieves an annualized return of 17.26% and a Sharpe ratio of 3.97, with moderate volatility (4.35%). Notably, all tested copula specifications deliver positive risk-adjusted returns, underscoring the robustness of our framework. Future research directions include exploring time-varying copulas, extending the framework to multiple target assets, and incorporating transaction costs for practical implementation.
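
A minimal sketch of the ℓ1-regularized donor-selection step, recast with scikit-learn's Lasso; the tickers, file, and penalty level are illustrative, and any constraints the paper may place on the weights are not reproduced here.

```python
# Sketch: sparse synthetic control via l1-regularized least squares, plus
# the cumulative mispricing index described above (assumed inputs).
import numpy as np
import pandas as pd
from sklearn.linear_model import Lasso

rets = pd.read_csv("sp500_returns.csv", index_col=0)  # hypothetical file
target = rets.pop("TARGET")                            # hypothetical ticker

# The l1 penalty drives most donor weights to zero, leaving a sparse subset.
lasso = Lasso(alpha=1e-4, fit_intercept=False).fit(rets.values, target.values)
weights = pd.Series(lasso.coef_, index=rets.columns)
donors = weights[weights != 0]
print(f"{len(donors)} active donors")  # the paper reports 27 for its target

# Relative mispricing: cumulative gap between target and synthetic returns.
synthetic = rets.values @ lasso.coef_
mispricing = np.cumsum(target.values - synthetic)
```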

  • Open access
Interdependencies between AI and Big Data Stocks and Tokens in Extreme Market Conditions: Implications for Portfolio Optimization Strategies

In the rapidly evolving landscape of artificial intelligence (AI) and big data, understanding the financial dynamics between tokens and stocks within these sectors has become increasingly essential for investors and policymakers. This study investigates the connectedness between these asset classes using Quantile-based Vector Autoregression (QVAR) models with "Extended Joint" connectivity and frequency-domain analysis. Covering the period from December 17, 2020, to February 11, 2025, our research reveals significant interdependencies, particularly during periods of heightened market volatility. The findings indicate that major technology stocks, such as those of Microsoft (MSFT) and Amazon (AMZN), act as dominant shock transmitters, exerting substantial influence over other assets. Conversely, AI-focused tokens, including SingularityNET (AGIX) and Numeraire (NMR), primarily serve as shock receivers, responding to, rather than driving, market fluctuations. From a portfolio optimization perspective, AGIX stands out as the most effective hedging asset, followed by NMR and CTXC. These results provide critical insights for investors aiming to enhance diversification strategies and manage risk in AI- and big data-related markets. Furthermore, this study contributes to a broader understanding of asset interactions within the digital economy, shedding light on the growing financialization of AI-driven innovations. The findings offer practical implications for institutional and retail investors seeking to navigate this rapidly evolving sector.
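
The full extended-joint QVAR connectedness machinery is beyond a short sketch, but its basic building block, a single quantile-VAR equation estimated at a tail quantile, can be illustrated as follows; the file, tickers, and the 5% quantile are assumptions.

```python
# Sketch: one equation of a quantile VAR at the 5th percentile using
# statsmodels' QuantReg; the paper's joint-spillover and frequency-domain
# decompositions are not reproduced here.
import pandas as pd
import statsmodels.api as sm

rets = pd.read_csv("ai_bigdata_returns.csv", index_col=0)  # hypothetical file
lagged = rets.shift(1).dropna()          # one-period lags of all assets
y = rets["AGIX"].loc[lagged.index]       # one AI token as dependent variable

fit = sm.QuantReg(y, sm.add_constant(lagged)).fit(q=0.05)
print(fit.params)  # tail-quantile responses to lagged MSFT, AMZN, etc.
```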

  • Open access
Detecting Fraudulent Transactions Using Artificial Intelligence Algorithms

Accounting systems are integral to managing financial transactions, and they generate substantial volumes of data as a result. This vast amount of data can create environments conducive to intentional fraudulent activities, particularly in high-dimensional settings where the complexity and volume of information can obscure irregularities. To combat this issue, various methods have been developed to estimate and detect fraudulent transactions within accounting systems. These methods differ widely in their audit processes, scopes, and applications, reflecting the diverse challenges faced in financial oversight.

In recent years, data mining techniques have gained prominence as effective tools for detecting fraud. Their utility stems from the ability to handle large datasets while maintaining a comprehensive audit scope, which is essential for identifying potential anomalies. This study investigated the effectiveness of two specific data mining approaches: artificial neural network and Random Forest methods. Utilizing a dataset comprising 10,000 entries, this study aimed to evaluate how well these methods could detect fraudulent transactions.

The analysis of the test dataset yielded impressive results, with the artificial neural network method achieving an accuracy rate of 90%. Meanwhile, the Random Forest method outperformed it, achieving an accuracy rate of 96.30% in identifying risks associated with fraud or errors. These findings underscore the potential of advanced data mining techniques in enhancing the integrity of accounting systems and improving fraud detection capabilities.
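
A minimal sketch of the comparison described above, assuming a labeled transaction table; the file name, label column, and model settings are placeholders rather than the authors' configuration.

```python
# Sketch: compare a Random Forest and a neural network on test accuracy,
# mirroring the two methods evaluated above (assumed dataset layout).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("transactions.csv")  # hypothetical file, ~10,000 entries
X, y = df.drop(columns=["is_fraud"]), df["is_fraud"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Random Forest", RandomForestClassifier(random_state=0)),
                  ("Neural network", MLPClassifier(max_iter=500,
                                                   random_state=0))]:
    acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: accuracy = {acc:.2%}")
```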

  • Open access
The Role of Artificial Intelligence as a Driving Variable in the Modern Market: A MICMAC Approach

Purpose:

This research aims to identify the leading financial technologies presently available in the market. Technological advancements such as the Internet of Things, cloud computing, augmented reality, big data, blockchain, smart space, 5G networks, automotive and robotic processes, and artificial intelligence (AI) have revolutionised traditional business methods. AI is a recent instance of an emerging technology with significant potential to impact the marketing field, and marketers worldwide are searching for AI solutions that will allow them to carry out their duties effectively. An in-depth examination of the pertinent literature can underscore the significance of AI in marketing and suggest prospective avenues for future investigation.

Design/Methodology/Approach:

The Interpretive Structural Modelling (ISM) technique, implemented with the SmartISM software, is used to determine the interrelationships between the variables, and MICMAC analysis is then used to classify them by driving power and dependence.
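
Concretely, MICMAC scores each variable by driving power (row sum of the final reachability matrix) and dependence (column sum), then places it in one of four quadrants. A toy sketch with an invented 4×4 reachability matrix; the study's actual matrix comes from ISM/SmartISM and is not shown here.

```python
# Sketch: MICMAC classification from a reachability matrix (toy example).
import numpy as np

R = np.array([[1, 1, 1, 1],   # AI reaches everything -> high driving power
              [0, 1, 0, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])
names = ["AI", "Blockchain", "BigData", "AR"]  # illustrative labels

driving = R.sum(axis=1)     # row sums: how many variables each one drives
dependence = R.sum(axis=0)  # column sums: how many variables drive it
mid = R.shape[0] / 2        # quadrant threshold

for n, d, p in zip(names, driving, dependence):
    quadrant = ("linkage" if d > mid and p > mid else
                "driver" if d > mid else
                "dependent" if p > mid else "autonomous")
    print(f"{n}: driving={d}, dependence={p} -> {quadrant}")
```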

Findings:

Artificial intelligence is the independent variable that propels the other variables in the final model. Blockchain, big data, and the Internet of Things have surfaced as significant driving factors. This study's dependent variables comprise augmented reality, automotive, cloud computing, 5G networks, and robotic processes. Smart space, on the other hand, is considered a linkage variable.

Implications:

Marketers must understand the significance of artificial intelligence, a pivotal technology in the present-day landscape that is catalysing diverse technological advances. They should incorporate such technologies into their daily business operations to enhance efficiency in a highly competitive environment.

Originality/Value:

This study underscores the importance of technological progress in the modern marketplace and its impact on the field of marketing, and it presents a novel framework that highlights the central role of artificial intelligence.

  • Open access
Predicting Education Loan Repayment: A SEM-ANN Integrative Modeling Approach

Education loans complement human capital development, and with successful recovery they become self-sustaining. Recovery can be enhanced if defaults can be predicted accurately, which would also optimize capital reserve requirements. Hence, this study evaluates the attitudinal factors in education loan repayment by integrating the borrower's willingness to repay with their ability to do so. It follows a multi-method approach to testing the antecedent attitudinal variables of education loan repayment intention. A literature search identifies themes for framing the hypotheses, which are tested quantitatively using partial least squares–structural equation modeling (PLS-SEM), and prediction accuracy is then calculated using artificial neural network (ANN) and deep neural network (DNN) modeling in multiple stages. Credit reporting and perceived quality of life were the two most significant variables in the PLS-SEM model, and integrating the model with the ANN in multiple stages increased prediction accuracy at each stage. The prediction accuracy of the ANN model before SEM integration was 87%; after the final-stage SEM-ANN integration it increased to 90%, while it increased from 89% to 93% after single-stage deep learning (DL) integration. Therefore, multi-stage SEM-ANN-DL integration improves the prediction accuracy for defaults. Improvements in prediction accuracy can help financial institutions plan their loan recovery and calculate the optimum capital reserves for provisioning against non-performing assets.
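
A minimal sketch of the ANN stage of this pipeline, assuming the two significant SEM constructs as inputs; the file, column names, and network settings are illustrative, not the study's configuration.

```python
# Sketch: feed the significant PLS-SEM constructs into a neural network to
# predict repayment, mirroring the staged SEM-ANN integration described above.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("borrower_survey.csv")  # hypothetical file
sem_significant = ["credit_reporting", "perceived_qol"]  # from the SEM stage
X, y = df[sem_significant], df["repaid"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
acc = accuracy_score(y_te, ann.fit(X_tr, y_tr).predict(X_te))
print(f"post-SEM ANN accuracy: {acc:.2%}")
```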
