List of accepted submissions

  • Open access
  • 124 Reads
Health monitoring of civil structures: A MCMC approach based on a multi-fidelity deep neural network surrogate

To meet the need for reliable real-time monitoring of civil structures, safety control, and optimization of maintenance operations, this paper presents a computational method for the stochastic estimation of the degradation of the load-bearing structural properties. Exploiting a Bayesian framework, the procedure sequentially updates the posterior probability of the damage parameters used to describe the aforementioned degradation, conditioned on noisy sensor observations, by means of Markov chain Monte Carlo (MCMC) sampling algorithms. To enable the analysis to run in real time, or close to it, the numerical model of the structure is replaced with a data-driven surrogate used to evaluate the conditional likelihood. The proposed surrogate model relies on a Multi-Fidelity (MF) Deep Neural Network (DNN), mapping the damage and operational parameters onto approximated sensor recordings. The MF-DNN is shown to effectively leverage information across multiple datasets, by learning the correlations among models of different fidelities without any prior assumption, ultimately alleviating the computational burden of the supervised training stage. The Low Fidelity (LF) responses are approximated by relying on proper orthogonal decomposition, for the sake of dimensionality reduction, and a fully connected DNN. The High Fidelity (HF) signals, which feed the MCMC within the outer-loop optimization, are instead generated by enriching the LF approximations through a deep long short-term memory network. Results relevant to a specific case study demonstrate the capability of the proposed procedure to estimate the distribution of the damage parameters, and prove the effectiveness of the MF scheme in outperforming a single-fidelity based method.
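As a minimal sketch of the outer sampling loop, the Python snippet below runs a random-walk Metropolis-Hastings chain in which a generic `surrogate` function stands in for the trained MF-DNN; the surrogate itself, the Gaussian noise level, and the flat prior are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch (not the authors' code): surrogate-based MCMC over
# damage parameters.
import numpy as np

def surrogate(theta):
    # Placeholder for the trained MF-DNN: maps damage/operational
    # parameters onto approximated sensor recordings (hypothetical).
    return np.sin(np.outer(theta, np.linspace(0.0, 1.0, 100))).sum(axis=0)

def log_likelihood(theta, y_obs, sigma=0.05):
    # Gaussian measurement-noise model (an assumed noise level).
    r = y_obs - surrogate(theta)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

def metropolis_hastings(y_obs, theta0, n_steps=5000, step=0.02):
    # Random-walk Metropolis-Hastings with a flat prior (assumed).
    theta = np.asarray(theta0, dtype=float)
    logp = log_likelihood(theta, y_obs)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * np.random.randn(theta.size)
        logp_prop = log_likelihood(prop, y_obs)
        if np.log(np.random.rand()) < logp_prop - logp:  # accept/reject
            theta, logp = prop, logp_prop
        chain.append(theta.copy())
    return np.array(chain)  # posterior samples of the damage parameters

# Toy usage: recover a two-parameter "damage state" from noisy recordings.
true_theta = np.array([0.3, 0.7])
y_obs = surrogate(true_theta) + 0.05 * np.random.randn(100)
samples = metropolis_hastings(y_obs, theta0=np.zeros(2))
print("posterior mean:", samples[len(samples) // 2:].mean(axis=0))
```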

  • Open access
  • 67 Reads
Assessment of the seismic bearing capacity of shallow strip footings over a void in heterogeneous soils: a Machine Learning-based approach

The estimation of the seismic bearing capacity of a strip footing is of paramount importance in geotechnical engineering. In the case of a shallow strip footing above a void in heterogeneous soil, the bearing capacity turns out to display a complex dependency on various parameters linked to the geometry of the void and to the properties of the soil. Recent research activities have highlighted that a methodology combining sensitivity analysis and machine learning (ML) can be extremely efficient in capturing such a complex dependency. To train the ML technique, a database consisting of 38,000 Finite Element Limit Analysis (FELA) models has been adopted in this work. With the aim of estimating the seismic bearing capacity, five strategies for selecting the training and test data have been investigated. By considering the seismic bearing capacity as the single output parameter of the ML-based algorithm, and the void depth and eccentricity, the soil undrained shear strength, the rate of change of its cohesion with depth, and the horizontal seismic acceleration as input parameters, the methodology has provided the most accurate results in mimicking the numerical, FELA-based reference solutions.
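The following is a hedged sketch of the described five-inputs-to-one-output mapping as a generic regression pipeline; the synthetic data, feature ordering, and the choice of a gradient-boosting regressor are illustrative assumptions, since the abstract does not specify the ML algorithm.

```python
# Hedged sketch: generic regression stand-in for the FELA-trained model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the 38,000-model FELA database; columns would be
# void depth, void eccentricity, undrained shear strength, cohesion rate
# with depth, and horizontal seismic acceleration.
rng = np.random.default_rng(0)
X = rng.random((38_000, 5))
y = X @ np.array([1.0, -0.5, 2.0, 0.8, -1.2]) + 0.01 * rng.standard_normal(38_000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("R^2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```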

  • Open access
  • 89 Reads
Learning the link between architectural form and structural efficiency: a supervised machine learning approach

In this work, we exploit supervised machine learning (ML) to investigate the relationship between architectural form and structural efficiency under seismic excitations. We inspect a small dataset of simulated responses of tall buildings that differ in terms of base and top plans, between which a vertical transformation method is adopted (tapered forms). A diagrid structure with members of tubular cross-section is mapped onto each architectural form, and static loads equivalent to the seismic excitation are applied. Different ML algorithms, such as KNN, SVM, Decision Tree, Ensemble, Discriminant, and Naïve Bayes, are then trained to classify the seismic response of each form on the basis of a specific label. The results to be presented rely upon the drift of the building at its top floor, though the same procedure can be generalized to adopt any performance characteristic of the considered structure, e.g. the drift ratio, total mass, or expected design weight. The classification algorithms are all tuned within a Bayesian optimization approach; the Decision Tree classifier is found to provide the highest accuracy, together with the lowest computing time. This research activity puts forward a promising perspective on the use of ML algorithms to help architectural and structural designers during the early stages of conception and control of tall buildings.
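A minimal sketch of the comparison described above, under stated assumptions: synthetic data and plain cross-validation stand in for the paper's simulated building responses and its Bayesian hyperparameter optimization.

```python
# Hedged sketch: comparing several of the listed classifier families.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the labelled form/response dataset.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier()),
                  ("SVM", SVC()),
                  ("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("Naive Bayes", GaussianNB())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```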

  • Open access
  • 87 Reads
Vectorial iterative schemes with memory for solving nonlinear systems of equations

Many iterative methods for solving nonlinear problems exist in the literature. Some of these methods can be transferred directly to the context of nonlinear systems, keeping their order of convergence, but others cannot be directly extended to the multidimensional case. Sometimes, procedures are designed specifically for multidimensional problems by using different techniques, such as composition and reduction or weight-function procedures, among others.

Our main aim is not only to design an iterative scheme for solving nonlinear systems, but also to ensure its high order of convergence by means of the introduction of matrix accelerating parameters. This is a challenging area of numerical analysis in which few procedures have been defined so far.

Once the iterative method has been designed, it is necessary to carry out a dynamical study in order to assess the width of the basins of attraction of the roots and to compare its stability with that of other known methods.
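As a hedged illustration of the kind of scheme with memory and a matrix accelerating parameter described above (a generic Steffensen-type vector iteration, not necessarily the scheme proposed in this work):

```latex
% Hedged illustration, not the authors' scheme.
\[
  x^{(k+1)} = x^{(k)} - \left[\, w^{(k)},\, x^{(k)};\, F \,\right]^{-1}
              F\!\left(x^{(k)}\right),
  \qquad
  w^{(k)} = x^{(k)} + G_k\, F\!\left(x^{(k)}\right),
\]
\[
  G_k = -\left[\, x^{(k)},\, x^{(k-1)};\, F \,\right]^{-1},
\]
% Here $[\cdot,\cdot;F]$ denotes a first-order divided difference operator;
% the matrix accelerating parameter $G_k$ is updated from the previous
% iterate $x^{(k-1)}$, which is where the "memory" enters and the order of
% convergence is raised without extra Jacobian evaluations.
```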

  • Open access
  • 124 Reads
An Image-based Algorithm for Automatic Detection of Loosened Bolts

Bolted joints are widely used to connect load-bearing elements in aerospace, civil, and mechanical engineering systems. During its service life, particularly under external dynamic loads, a bolted joint may undergo self-loosening. Bolt looseness causes a reduction in the load-bearing capacity and eventually leads to the failure of the joint. This paper presents an automated image-based algorithm that combines the Faster R-CNN model with image processing for the quick detection of loosened bolts in a structural connection. The algorithm is validated on a lab-scale bolted-joint model for which various bolt-loosening events are simulated. Image data of the joint are captured and passed through the algorithm for bolt-looseness detection. The obtained results show that the loosened bolts in the joint were well detected and their loosening degrees precisely quantified. The image-based algorithm is therefore promising for real-time structural health monitoring of realistic bolted joints.
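A minimal inference sketch with torchvision's off-the-shelf Faster R-CNN, standing in for the paper's trained bolt detector; the COCO-pretrained weights, the image path `joint.jpg`, and the 0.8 confidence threshold are illustrative assumptions.

```python
# Hedged sketch: generic Faster R-CNN inference, not the authors' detector.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("joint.jpg").convert("RGB")  # hypothetical image path
with torch.no_grad():
    pred = model([to_tensor(image)])[0]

# Keep confident detections; each box would crop a bolt region that is then
# passed to classical image processing (e.g., edge/angle analysis) to
# quantify the loosening degree, as the abstract describes.
boxes = pred["boxes"][pred["scores"] > 0.8]
print(f"{len(boxes)} candidate bolt regions detected")
```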

  • Open access
  • 48 Reads
On the Compressive Power of Boolean Threshold Autoencoders
Published: 24 September 2021 by MDPI in The 1st Online Conference on Algorithms, session Artificial Intelligence Algorithms

Autoencoders have been extensively studied and applied in recent research on neural networks. An autoencoder is a layered neural network consisting of an encoder and a decoder, where the former compresses an input vector to a lower-dimensional vector, and the latter transforms the low-dimensional vector back to the original input vector, exactly or approximately. We study the compressive power of autoencoders using the Boolean threshold network model (i.e., a multi-layer perceptron with linear threshold activation functions) by analyzing the numbers of nodes and layers required to ensure that each vector in a given set of distinct binary input vectors is transformed back exactly. We show that for any set of $n$ distinct vectors there exists a seven-layer autoencoder with the optimal compression ratio (i.e., a middle layer of logarithmic size), but that there exists a set of vectors for which no three-layer autoencoder has a middle layer of logarithmic size. We also study the numbers of nodes and layers required only for encoding; the results suggest that the decoding part is the bottleneck of autoencoding.

This talk is based on joint work with A. A. Melkman, S. Guo, W-K. Ching, and P. Liu.
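A minimal sketch of the network model under study: a layered Boolean threshold network in which each node fires iff its weighted input sum reaches its threshold. The weights and thresholds below are arbitrary illustrations of the model, not a construction from the paper's proofs.

```python
# Hedged sketch: evaluating a layered Boolean threshold network.
import numpy as np

def threshold_layer(x, W, t):
    # x: binary input vector; W: weight matrix; t: threshold vector.
    # A node outputs 1 iff its weighted input sum reaches its threshold.
    return (W @ x >= t).astype(int)

def forward(x, layers):
    for W, t in layers:  # apply each (weights, thresholds) layer in turn
        x = threshold_layer(x, W, t)
    return x

# Toy single-layer encoder mapping 3-bit inputs to 2-bit codes.
layers = [(np.array([[1, 1, 0], [0, 1, 1]]), np.array([2, 1]))]
print(forward(np.array([1, 1, 0]), layers))  # -> [1 1]
```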

  • Open access
  • 55 Reads
A polynomial-time approximation to a minimum dominating set in a graph

Finding a dominating set with minimum cardinality in a graph $G=(V,E)$ (a subset of vertices $S\subseteq V$ such that every vertex $v\in V\setminus S$ has at least one neighbor in $S$) is known to be NP-hard. A polynomial-time approximation algorithm for this problem, described here, works in two stages. At the first stage a dominating set is generated by a greedy algorithm, and at the second stage this dominating set is purified (reduced). The reduction is achieved through an analysis of the flowchart of the first-stage algorithm and a special kind of clustering of the dominating set generated at the first stage. This clustering naturally leads to a special kind of spanning forest of the graph $G$, which serves as the basis for the second, purification stage. We identify some types of graphs for which the algorithm of the first stage already delivers an optimal solution, and derive sufficient conditions under which the overall algorithm constructs an optimal solution. We give three alternative approximation ratios for the algorithm of the first stage, two of which are expressed solely in terms of invariant problem-instance parameters. The greedy algorithm of the first stage has essentially the same properties as the earlier known state-of-the-art algorithms for the problem, while the second, purification stage yields an essential improvement in the quality of the dominating set created at the first stage. We have measured the practical behavior of both stages on randomly generated problem instances, using two different random methods to generate our graphs, each yielding graphs with a different structure. For the first class of instances the greedy algorithm of stage 1 already gave quite good solutions, so the reduction at stage 2 was not very significant. For the second class of instances the algorithm of stage 1 delivered solutions of poor quality; the algorithm of stage 2, however, turned out to be very efficient: applied to the graphs of the second class, it significantly reduced the size of the solutions delivered by stage 1, creating optimal or close-to-optimal solutions.
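A minimal sketch of the first stage only (the classical greedy choice of the vertex covering the most uncovered vertices); the second, purification stage based on clustering and a spanning forest is specific to the paper and is omitted here.

```python
# Hedged sketch: greedy dominating set (stage 1 only).
def greedy_dominating_set(adj):
    # adj: dict mapping each vertex to the set of its neighbors.
    uncovered = set(adj)
    dom = set()
    while uncovered:
        # Closed neighborhood N[v] = {v} | N(v); greedily pick the vertex
        # whose closed neighborhood covers the most uncovered vertices.
        v = max(adj, key=lambda u: len(({u} | adj[u]) & uncovered))
        dom.add(v)
        uncovered -= {v} | adj[v]
    return dom

# Path graph 0-1-2-3-4: an optimal dominating set has size 2, e.g. {1, 3}.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_dominating_set(adj))  # -> {1, 3}
```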

  • Open access
  • 92 Reads
Unscented Kalman Filter empowered by Bayesian model evidence for system identification in structural dynamics

System identification is often limited to parameter identification, while model uncertainties are disregarded or accounted for by a fictitious process noise. However, modelling assumptions may have a large impact on system identification and can lead to bias, or even divergence, of the estimates if they cannot correctly capture the real system behaviour. Indeed, the adoption of either excessively simplified or overly complex models may have a detrimental effect on the tracking of the system state: oversimplified models may underestimate the effect of a physical process taking place, while complex models may lead to good data fitting but, possibly, to poor predictions. For this reason, we propose an Unscented Kalman Filter (UKF) empowered with online computation of the Bayesian model evidence. This approach employs more than one model to track the state of the system and associates a plausibility measure with each model, updated whenever new measurements are exploited. In this way, the filter outcomes obtained with different models can be compared, and a quantitative confidence value is associated with each of them. While the coupling of the Extended Kalman Filter (EKF) with Bayesian model evidence has already been addressed, it still lacks robustness in the case of severe nonlinearities in the system response to external stimuli; we have therefore modified the approach to exploit the most striking features of the UKF, namely its ease of implementation (as it does not require the computation of model Jacobians) and its higher-order accuracy in the description of the evolution of the state statistics. A few challenging identification problems related to structural dynamics are discussed to show the effectiveness of the proposed methodology.
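A minimal sketch of the model-plausibility update at the core of such an approach: assuming each filter in the bank (one UKF per candidate model) returns the marginal likelihood of the new measurement, the discrete model probabilities are updated by Bayes' rule. The UKF internals and the likelihood values below are omitted or invented for illustration.

```python
# Hedged sketch: Bayesian model-plausibility update over a filter bank.
import numpy as np

def update_model_plausibilities(prior_probs, measurement_likelihoods):
    # Bayes' rule over the discrete set of candidate models:
    # p(M_i | y_1:k) ∝ p(y_k | M_i, y_1:k-1) * p(M_i | y_1:k-1)
    posterior = np.asarray(prior_probs) * np.asarray(measurement_likelihoods)
    return posterior / posterior.sum()

# Two candidate structural models; model 2 explains the data better here.
probs = np.array([0.5, 0.5])
for lik in [(0.8, 1.3), (0.6, 1.1), (0.9, 1.4)]:  # per-step likelihoods
    probs = update_model_plausibilities(probs, lik)
print("model plausibilities:", probs)
```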

  • Open access
  • 67 Reads
A Bi-criteria Model for Saving a Path Minimizing the Time Horizon of a Dynamic Contraflow

The quickest contraflow in a single-source, single-sink network is a dynamic flow that minimizes the time horizon within which a given flow value at the source is sent to the sink, allowing arc reversals. Because of the arc reversals, for a sufficiently large flow value the residual capacity of all or most of the paths from a given node towards the source may be zero or significantly reduced. In some cases, e.g., for the movement of facilities to support an evacuation in an emergency, it is imperative to save a path from a given node towards the source. We formulate such a problem as a bi-criteria optimization problem in which one objective minimizes the length of the path to be saved from a specific node towards the source, and the other minimizes the quickest time of the flow from the source towards the sink allowing arc reversals. We propose an algorithm based on the epsilon-constraint approach to find non-dominated solutions.
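A toy sketch of the epsilon-constraint principle on a discrete bi-criteria set: in the actual algorithm each evaluation would solve a quickest-contraflow subproblem under a path-length constraint, whereas here each candidate is just a precomputed (path length, quickest time) pair.

```python
# Hedged sketch: epsilon-constraint sweep over a toy bi-criteria set.
candidates = [(3, 10), (4, 8), (5, 8), (6, 5), (7, 6)]

def epsilon_constraint(cands):
    frontier = []
    for eps in sorted({length for length, _ in cands}):
        feasible = [c for c in cands if c[0] <= eps]  # constrain path length
        best = min(feasible, key=lambda c: c[1])      # minimize quickest time
        if best not in frontier:
            frontier.append(best)
    return frontier

print(epsilon_constraint(candidates))  # -> [(3, 10), (4, 8), (6, 5)]
```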

  • Open access
  • 96 Reads
A Fast Algorithm for Euclidean Bounded Single-Depot Multiple Traveling Salesman Problem

The Multiple Traveling Salesman Problem (MTSP) is a combinatorial optimization problem that models a number of real-life problems. There are $n+1$ given objects, commonly referred to as cities, among which there is one distinguished city called the depot, and $k$ additional agents commonly referred to as salesmen. Each salesman has to build their own tour that starts at the depot, ends at the depot, and visits one or more other cities exactly once. Visiting city $j$ from city $i$ implies a cost $c_{ij}$. The cost of a tour is the sum of the costs of each pair of consecutive cities in that tour. The aim is to minimize the total cost of all $k$ tours. Here we consider the two-dimensional Euclidean version of the problem, impose lower and upper bounds on the number of cities in a tour, and suggest a three-phase heuristic algorithm for this version. At the first phase the whole set of cities is partitioned into $k$ disjoint subsets, at the second phase a feasible tour for each of these subsets is constructed, and at the third phase these feasible tours are iteratively improved. We report preliminary experimental results for 22 benchmark instances. The approximation gap provided by the proposed heuristic is comparable to state-of-the-art results, whereas the heuristic is much faster than earlier known state-of-the-art algorithms.
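A minimal sketch of the three-phase structure under stated assumptions: angular partitioning around the depot (phase 1), nearest-neighbor tour construction (phase 2), and a placeholder for the improvement pass (phase 3); the paper's actual phase details and the tour-size bounds are omitted.

```python
# Hedged sketch: a generic three-phase MTSP heuristic, not the paper's.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mtsp_heuristic(depot, cities, k):
    # Phase 1: partition into k angular sectors around the depot
    # (an illustrative partitioning choice).
    cities = sorted(cities, key=lambda c: math.atan2(c[1] - depot[1],
                                                     c[0] - depot[0]))
    m = -(-len(cities) // k)  # ceil(n / k)
    chunks = [cities[i * m:(i + 1) * m] for i in range(k)]
    tours = []
    for chunk in chunks:
        # Phase 2: nearest-neighbor tour from the depot back to the depot.
        tour, rest = [depot], set(chunk)
        while rest:
            nxt = min(rest, key=lambda c: dist(tour[-1], c))
            tour.append(nxt)
            rest.remove(nxt)
        tour.append(depot)
        tours.append(tour)
    # Phase 3: iterative improvement (e.g., 2-opt) would refine each tour.
    return tours

print(mtsp_heuristic((0, 0), [(1, 2), (2, -1), (-1, 1), (-2, -2)], 2))
```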
