Publications

Up-to-date information can be found on my Google Scholar and arXiv profiles.

Click on the publications below to expand them.

Submitted

M. Pacini, X. Dong, B. Lepri, G. Santin, Separation Power of Equivariant Neural Networks (2024). Preprint
The separation power of a machine learning model refers to its capacity to distinguish distinct inputs, and it is often employed as a proxy for its expressivity. In this paper, we propose a theoretical framework to investigate the separation power of equivariant neural networks with point-wise activations. Using the proposed framework, we derive an explicit description of the inputs that are indistinguishable by a family of neural networks with a given architecture, and we show that this description is unaffected by the choice of non-polynomial activation function. This allows us to understand the role played by activation functions in separability: indeed, we show that all non-polynomial activations, such as ReLU and sigmoid, are equivalent in terms of expressivity and that they reach maximum discrimination capacity. We then demonstrate that assessing the separation power of an equivariant neural network can be reduced to evaluating the separation power of minimal representations, and we conclude by illustrating how these minimal components form a hierarchy in separation power.
  
@misc{Pacini2024b,
      title={Separation Power of Equivariant Neural Networks},
      author={Marco Pacini and Xiaowen Dong and Bruno Lepri and Gabriele Santin},
      year={2024},
      eprint={2406.08966},
      archivePrefix={arXiv},
}
  

T. Wenzel, D. Winkle, G. Santin, B. Haasdonk, Adaptive meshfree solution of linear partial differential equations with PDE-greedy kernel methods (2022). Preprint
We consider the meshless solution of PDEs via symmetric kernel collocation by using greedy kernel methods. In this way we avoid the need for mesh generation, which can be challenging for non-standard domains or manifolds. We introduce and discuss different kinds of greedy selection criteria for the collocation points, such as the PDE-P-greedy and the PDE-f-greedy. Subsequently we analyze the convergence rates of these algorithms and provide bounds on the approximation error in terms of the number of greedily selected points. In particular, we prove that target-data-dependent algorithms, i.e., those using knowledge of the right-hand side functions of the PDE, exhibit faster convergence rates. The provided analysis is applicable to PDEs both on domains and on manifolds. This fact and the advantages of target-data-dependent algorithms are highlighted by numerical examples.
  
@misc{Wenzel2022d,
      title={Adaptive meshfree solution of linear partial differential equations with PDE-greedy kernel methods},
      author={T. Wenzel and D. Winkle and G. Santin and B. Haasdonk},
      year={2022},
}
  

T. Wenzel, G. Santin, B. Haasdonk, Universality and optimality of structured deep kernel networks (2021). Preprint
Kernel-based methods yield approximation models that are flexible, efficient and powerful. In particular, they utilize fixed feature maps of the data and are often associated with strong analytical results that prove their accuracy. On the other hand, the recent success of machine learning methods has been driven by deep neural networks (NNs), which achieve significant accuracy on very high-dimensional data, in that they are also able to learn efficient data representations, or data-based feature maps. In this paper, we leverage a recent deep kernel representer theorem to connect the two approaches and understand their interplay. In particular, we show that the use of special types of kernels yields models reminiscent of neural networks that are founded in the same theoretical framework as classical kernel methods, while enjoying many computational properties of deep neural networks. Notably, the introduced Structured Deep Kernel Networks (SDKNs) can be viewed as neural networks with optimizable activation functions obeying a representer theorem. We establish their universal approximation properties in different asymptotic regimes of unbounded numbers of centers, width and depth. In particular, in the case of unbounded depth the construction is asymptotically better than corresponding constructions for ReLU neural networks, which is made possible by the flexibility of kernel approximation.
  
@misc{Wenzel2021c,
      title={Universality and Optimality of Structured Deep Kernel Networks}, 
      author={Tizian Wenzel and Gabriele Santin and Bernard Haasdonk},
      year={2021},
      eprint={2105.07228},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
  

Accepted

M. Pacini, X. Dong, B. Lepri, G. Santin, Point-wise Activations and Steerable Convolutional Networks, Accepted for publication in The Second Learning on Graphs Conference (2023). Preprint
Steerable Convolutional Neural Networks are a popular and efficient class of equivariant models. For some specific groups, representations, and choices of coordinates, the most common point-wise activations, such as ReLU, are not equivariant, and hence cannot be employed in designing equivariant neural networks. In this paper, we present a simple yet effective generalization of such results for equivariant networks. First, we prove that such point-wise activations can be employed in disentangled layers only when a simple group-theoretic condition is satisfied, namely, when the linear representations underlying their feature spaces are trivial representations. Second, we show analogous results for connected compact groups, where the only admissible equivariant neural networks with point-wise activations are the invariant ones.
  
@InProceedings{Pacini2023,
  author    = {Marco Pacini and Xiaowen Dong and Bruno Lepri and Gabriele Santin},
  title     = {Point-wise Activations and Steerable Convolutional Networks},
  booktitle = {The Second Learning on Graphs Conference},
  year      = {2023},
  url       = {https://openreview.net/forum?id=gsJPYzdA0S},
}
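
A quick way to see the obstruction behind these results: point-wise ReLU does not commute with a non-trivial rotation representation, while it trivially commutes with the identity (trivial) representation. The following numpy check is a hypothetical illustration, not code from the paper.

import numpy as np

# 2D rotation by 90 degrees: a non-trivial representation of SO(2) acting on a feature pair
theta = np.pi / 2
rho = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

relu = lambda v: np.maximum(v, 0.0)
x = np.array([1.0, -2.0])

# Point-wise ReLU does not commute with the rotation ...
print(relu(rho @ x))   # activation after the group action: [2. 1.]
print(rho @ relu(x))   # group action after the activation: [0. 1.]

# ... while it commutes with the trivial (identity) representation
rho_trivial = np.eye(2)
print(np.allclose(relu(rho_trivial @ x), rho_trivial @ relu(x)))  # True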
  

Published

2024

A. Longa, S. Azzolin, G. Santin, G. Cencetti, P. Liò, B. Lepri, A. Passerini, Explaining the Explainers in Graph Neural Networks: a Comparative Study, ACM Computing Surveys (2024). Preprint Published
Following a fast initial breakthrough in graph-based learning, Graph Neural Networks (GNNs) have reached widespread application in many science and engineering fields, prompting the need for methods to understand their decision process. GNN explainers have started to emerge in recent years, with a multitude of methods both novel and adapted from other domains. To sort out this plethora of alternative approaches, several studies have benchmarked the performance of different explainers in terms of various explainability metrics. However, these earlier works make no attempt to provide insights into why different GNN architectures are more or less explainable, or into which explainer should be preferred in a given setting. In this survey, we fill these gaps by devising a systematic experimental study, which tests ten explainers on eight representative architectures trained on six carefully designed graph and node classification datasets. With our results, we provide key insights into the choice and applicability of GNN explainers, isolate key components that make them usable and successful, and provide recommendations on how to avoid common interpretation pitfalls. We conclude by highlighting open questions and directions of possible future research.
  
@Article{Longa2024,
  author    = {Longa, Antonio and Azzolin, Steve and Santin, Gabriele and Cencetti, Giulia and Lio, Pietro and Lepri, Bruno and Passerini, Andrea},
  title     = {Explaining the Explainers in Graph Neural Networks: a Comparative Study},
  journal   = {ACM Comput. Surv.},
  year      = {2024},
  month     = {Sep},
  issn      = {0360-0300},
  doi       = {10.1145/3696444},
  url       = {https://doi.org/10.1145/3696444},
  address   = {New York, NY, USA},
  publisher = {Association for Computing Machinery},
}
  

G. Santin, T. Wenzel, B. Haasdonk, On the optimality of target-data-dependent kernel greedy interpolation in Sobolev Reproducing Kernel Hilbert Spaces, SIAM Journal on Numerical Analysis (2024). Preprint Published
Kernel interpolation is a versatile tool for the approximation of functions from data, and it can be proven to have some optimality properties when used with kernels related to certain Sobolev spaces. In the context of interpolation, the selection of optimal function sampling locations is a central problem, both from a practical perspective, and as an interesting theoretical question. Greedy interpolation algorithms provide a viable solution for this task, being efficient to run and provably accurate in their approximation. In this paper we close a gap that is present in the convergence theory for these algorithms by employing a recent result on general greedy algorithms. This modification leads to new convergence rates which match the optimal ones when restricted to the P-greedy target-data-independent selection rule, and can additionally be proven to be optimal when they fully exploit adaptivity (f-greedy). Other than closing this gap, the new results have some significance in the broader setting of the optimality of general approximation algorithms in Reproducing Kernel Hilbert Spaces, as they allow us to compare adaptive interpolation with non-adaptive best nonlinear approximation.
  
@Article{Santin2024a,
  author    = {Santin, Gabriele and Wenzel, Tizian and Haasdonk, Bernard},
  title     = {On the Optimality of Target-Data-Dependent Kernel Greedy Interpolation in Sobolev Reproducing Kernel Hilbert Spaces},
  year      = {2024},
  doi       = {10.1137/23M1587956},
  url       = {https://epubs.siam.org/doi/abs/10.1137/23M1587956},
  journal   = {SIAM Journal on Numerical Analysis},
  number    = {5},
  pages     = {2249-2275},
  volume    = {62},
}
  

T. Wenzel, G. Santin, B. Haasdonk, Stability of convergence rates: Kernel interpolation on non-Lipschitz domains, IMA Journal of Numerical Analysis (2024). Preprint Published
Error estimates for kernel interpolation in Reproducing Kernel Hilbert Spaces (RKHS) usually assume quite restrictive properties on the shape of the domain, especially in the case of infinitely smooth kernels like the popular Gaussian kernel. In this paper we leverage an analysis of greedy kernel algorithms to prove that it is possible to obtain convergence results (in the number of interpolation points) for kernel interpolation on arbitrary domains $\Omega\subset\mathbb{R}^d$, thus allowing for non-Lipschitz domains including, e.g., cusps and irregular boundaries. In particular, we show that, when going to a smaller domain $\tilde\Omega\subset\Omega\subset\mathbb{R}^d$, the convergence rate does not deteriorate, i.e., the convergence rates are stable with respect to passing to a subset. The impact of this result is explained on the examples of kernels of finite as well as infinite smoothness, like the Gaussian kernel. A comparison to approximation in Sobolev spaces is drawn, where the shape of the domain $\Omega$ has an impact on the approximation properties. Numerical experiments illustrate and confirm the analysis.
  
@Article{Wenzel2024a,
  author    = {Wenzel, Tizian and Santin, Gabriele and Haasdonk, Bernard},
  title     = {Stability of convergence rates: Kernel interpolation on non-Lipschitz domains},
  journal   = {IMA Journal of Numerical Analysis},
  year      = {2024},
  month     = {05},
  pages     = {drae014},
  issn      = {0272-4979},
  doi       = {10.1093/imanum/drae014},
  url       = {https://doi.org/10.1093/imanum/drae014},
}
  

L. Ferrarotti, M. Luca, G. Santin, G. Previati, G. Mastinu, M. Gobbi, E. Campi, L. Uccello, A. Albanese, P. Zalaya, A. Roccasalva, B. Lepri, Autonomous and Human-Driven Vehicles Interacting in a Roundabout: A Quantitative and Qualitative Evaluation, IEEE Access (2024). Preprint Published
Optimizing traffic dynamics in an evolving transportation landscape is crucial, particularly in scenarios where autonomous vehicles (AVs) with varying levels of autonomy coexist with human-driven cars. This paper presents a novel approach to optimizing the choices of AVs using Proximal Policy Optimization (PPO), a reinforcement learning algorithm. We learned a policy to minimize traffic jams (i.e., to minimize the time to cross the scenario) and to minimize pollution in a roundabout in Milan, Italy. Through empirical analysis, we demonstrate that our approach can reduce time and pollution levels. Furthermore, we qualitatively evaluate the learned policy using a cutting-edge cockpit to assess its performance in near-real-world conditions. To gauge the practicality and acceptability of the policy, we conducted evaluations with human participants using the simulator, focusing on a range of metrics like traffic smoothness and safety perception. In general, our findings show that human-driven vehicles benefit from optimizing AV dynamics. Moreover, participants in the study highlighted that the scenario with 80% AVs is perceived as safer than the scenario with 20%. The same result is obtained for traffic smoothness perception.
  
@Article{Ferrarotti2024,
  author    = {Ferrarotti, Laura and Luca, Massimiliano and Santin, Gabriele and Previati, Giorgio and Mastinu, Gianpiero and Gobbi, Massimiliano and Campi, Elena and Uccello, Lorenzo and Albanese, Antonino and Zalaya, Praveen and Roccasalva, Alessandro and Lepri, Bruno},
  title     = {Autonomous and Human-Driven Vehicles Interacting in a Roundabout: A Quantitative and Qualitative Evaluation},
  journal   = {IEEE Access},
  year      = {2024},
  pages     = {1-1},
  doi       = {10.1109/ACCESS.2024.3370469},
}
  

M. Hammer, T. Wenzel, G. Santin, L. Meszaros-Beller, J. Paige Little, B. Haasdonk, S. Schmitt, A new method to design energy-conserving surrogate models for the coupled, non-linear responses of intervertebral discs, Biomechanics and Modeling in Mechanobiology (2024). Preprint Published
The aim of this study was to design physics-preserving and precise surrogate models of the non-linear elastic behaviour of an intervertebral disc (IVD). Based on artificial force-displacement data sets from detailed finite element (FE) disc models, we used greedy kernel and polynomial approximations of second, third and fourth order to train surrogate models for the scalar force-torque-potential. Doing so, the resulting models of the elastic IVD responses ensured the conservation of mechanical energy through their structure. At the same time, they were capable of predicting disc forces for the full physiological range of motion and for the coupling of all six degrees of freedom of an intervertebral joint. The performance of all surrogate models for a subject-specific L4|5 disc geometry was evaluated both on training and test data obtained from uncoupled (one-dimensional), weakly coupled (two-dimensional), and random movement trajectories in the entire six-dimensional (6d) physiological displacement range, as well as on synthetic kinematic data. We observed the highest precision for the kernel surrogate, followed by the fourth-order polynomial model. Both clearly outperformed the second-order polynomial model, which is equivalent to the commonly used stiffness matrix in neuro-musculoskeletal simulations. Hence, the proposed model architectures have the potential to improve the accuracy, and therewith the validity, of load predictions in neuro-musculoskeletal spine models.
  
@Article{Hammer2024,
  author    = {Hammer, Maria and Wenzel, Tizian and Santin, Gabriele and Meszaros-Beller, Laura and Little, Judith Paige and Haasdonk, Bernard and Schmitt, Syn},
  title     = {A new method to design energy-conserving surrogate models for the coupled, nonlinear responses of intervertebral discs},
  journal   = {Biomechanics and Modeling in Mechanobiology},
  year      = {2024},
  month     = {Jan},
  issn      = {1617-7940},
  doi       = {10.1007/s10237-023-01804-4},
  url       = {https://doi.org/10.1007/s10237-023-01804-4},
}
  

G. Cencetti, L. Lucchini, G. Santin, F. Battiston, E. Moro, A. Pentland, B. Lepri, Temporal clustering of social interactions trades-off disease spreading and knowledge diffusion, Journal of the Royal Society Interface (2024). Preprint Published
Non-pharmaceutical measures such as preventive quarantines, remote working, school and workplace closures, lockdowns, etc. have shown effectiveness from an epidemic control perspective; however, they also have significant negative consequences for social life and relationships, work routines, and community engagement. In particular, complex ideas, work and school collaborations, innovative discoveries, and the formation and maintenance of resilient norms, which often require face-to-face interactions of two or more parties to be developed and synergically coordinated, are particularly affected. In this study, we propose an alternative hybrid solution that balances the slowdown of epidemic diffusion with the preservation of face-to-face interactions. Our approach involves a two-step partitioning of the population. First, we tune the level of node clustering, creating "social bubbles" with increased contacts within each bubble and fewer outside, while maintaining the average number of contacts in each network. Second, we tune the level of temporal clustering by pairing, for a certain time interval, nodes from specific social bubbles. Our results demonstrate that a hybrid approach can achieve better trade-offs between epidemic control and complex knowledge diffusion. The versatility of our model enables tuning and refining clustering levels to optimally achieve the desired trade-off, based on the potentially changing characteristics of a disease or knowledge diffusion process.
  
@Article{Cencetti2023,
  author    = {Cencetti, Giulia and Lucchini, Lorenzo and Santin, Gabriele and Battiston, Federico and Moro, Esteban and Pentland, Alex and Lepri, Bruno},
  title     = {Temporal clustering of social interactions trades-off disease spreading and knowledge diffusion},
  journal   = {Journal of The Royal Society Interface},
  year      = {2024},
  volume    = {21},
  number    = {210},
  pages     = {20230471},
  doi       = {10.1098/rsif.2023.0471},
  eprint    = {https://royalsocietypublishing.org/doi/pdf/10.1098/rsif.2023.0471},
  url       = {https://royalsocietypublishing.org/doi/abs/10.1098/rsif.2023.0471},
}
  

M. Pacini, X. Dong, B. Lepri, G. Santin, A Characterization Theorem for Equivariant Networks with Point-wise Activations, The Twelfth International Conference on Learning Representations (2024). Preprint Published
Equivariant neural networks have shown improved performance, expressiveness and sample complexity on symmetrical domains. But for some specific symmetries, representations, and choices of coordinates, the most common point-wise activations, such as ReLU, are not equivariant, hence they cannot be employed in the design of equivariant neural networks. The theorem we present in this paper describes all possible combinations of representations, choices of coordinates and point-wise activations that yield an equivariant layer, generalizing and strengthening existing characterizations. Notable cases of practical relevance are discussed as corollaries. Indeed, we prove that rotation-equivariant networks can only be invariant, as happens for any network which is equivariant with respect to connected compact groups. Then, we discuss implications of our findings when applied to important instances of equivariant networks. First, we completely characterize permutation-equivariant networks such as Invariant Graph Networks with point-wise nonlinearities and their geometric counterparts, highlighting a plethora of models whose expressive power and performance are still unknown. Second, we show that feature spaces of disentangled steerable convolutional neural networks are trivial representations.
  
@InProceedings{Pacini2024,
  author    = {Pacini, Marco and Dong, Xiaowen and Lepri, Bruno and Santin, Gabriele},
  title     = {A Characterization Theorem for Equivariant Networks with Point-wise Activations},
  booktitle = {The Twelfth International Conference on Learning Representations},
  year      = {2024},
  url       = {https://openreview.net/forum?id=79FVDdfoSR},
}
  

2023

G. Previati, G. Mastinu, E. Campi, M. Gobbi, L. Uccello, Á. Varela Daniel, A. Albanese, A. Roccasalva, G. Santin, M. Luca, B. Lepri, L. Ferrarotti, N. di Pietro, Roundabout Traffic: Simulation With Automated Vehicles, AI, 5G, Edge Computing and Human in the Loop, ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (2023). Preprint Published
The aim of the paper is to assess how the traffic of roundabouts could be organized in the future. A mixed traffic is supposed to occur, featuring both fully automated vehicles and vehicles driven by humans. The case study is part of a comprehensive research project funded by the European Commission, focusing on how to improve the 5G network by means of Artificial Intelligence (AI) and edge computing. The traffic flow in a reference roundabout is simulated by SUMO, and a Reinforcement Learning (RL) algorithm is derived to drive the automated vehicles in the roundabout. By means of a dynamic driving simulator, a real human drives in the simulated traffic of the roundabout. The human-in-the-loop is needed to check whether the driver feels comfortable and safe while driving in the presence of automated cars. The preliminary results are quite promising. Drivers do not seem worried when they interact with Connected and Automated Vehicles (CAVs) in the roundabout. They seem to prefer sharing the roundabout with CAVs, as they perceive the traffic to be more fluent. Such preliminary results deserve an extensive statistical investigation for a final assessment.
  
@InProceedings{Previati2023,
  author    = {Previati, Giorgio and Mastinu, Gianpiero and Campi, Elena and Gobbi, Massimiliano and Uccello, Lorenzo and Varela Daniel, Álvaro and Albanese, Antonino and Roccasalva, Alessandro and Santin, Gabriele and Luca, Massimiliano and Lepri, Bruno and Ferrarotti, Laura and di Pietro, Nicola},
  title     = {Roundabout Traffic: Simulation With Automated Vehicles, AI, 5G, Edge Computing and Human in the Loop},
  year      = {2023},
  volume    = {Volume 1: 25th International Conference on Advanced Vehicle Technologies (AVT)},
  series    = {International Design Engineering Technical Conferences and Computers and Information in Engineering Conference},
  month     = {08},
  pages     = {V001T01A016},
  doi       = {10.1115/DETC2023-116402},
}
  

A. Longa, V. Lachi, G. Santin, M. Bianchini, B. Lepri, P. Lio, F. Scarselli, A. Passerini, Graph Neural Networks for temporal graphs: State of the art, open challenges, and opportunities, Transactions on Machine Learning Research (2023). Preprint Published
Graph Neural Networks (GNNs) have become the leading paradigm for learning on (static) graph-structured data. However, many real-world systems are dynamic in nature, since the graph and node/edge attributes change over time. In recent years, GNN-based models for temporal graphs have emerged as a promising area of research to extend the capabilities of GNNs. In this work, we provide the first comprehensive overview of the current state of the art of temporal GNNs, introducing a rigorous formalization of learning settings and tasks and a novel taxonomy categorizing existing approaches in terms of how the temporal aspect is represented and processed. We conclude the survey with a discussion of the most relevant open challenges for the field, from both research and application perspectives.
  
@Article{Longa2023,
  author    = {Antonio Longa and Veronica Lachi and Gabriele Santin and Monica Bianchini and Bruno Lepri and Pietro Lio and Franco Scarselli and Andrea Passerini},
  title     = {Graph Neural Networks for Temporal Graphs: State of the Art, Open Challenges, and Opportunities},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  issn      = {2835-8856},
  url       = {https://openreview.net/forum?id=pHCdMat0gI},
}
  

A. Alla, H. Oliveira, G. Santin, HJB-RBF based approach for the control of PDEs, Journal of Scientific Computing (2023). Preprint Published
Semi-Lagrangian schemes for the discretization of the dynamic programming principle are based on a time discretization projected on a state-space grid. The use of a structured grid makes this approach infeasible for high-dimensional problems due to the curse of dimensionality. Here, we present a new approach for infinite horizon optimal control problems where the value function is computed using Radial Basis Functions (RBF) by Shepard's moving least squares approximation method on scattered grids. We propose a new method to generate a scattered mesh driven by the dynamics, and to select the shape parameter of the RBF using an optimization routine. This mesh helps to localize the problem and to approximate the dynamic programming principle in high dimension. Error estimates for the value function are also provided. Numerical tests for high-dimensional problems show the effectiveness of the proposed method.
  
@Article{Alla2023,
  author    = {Alla, Alessandro and Oliveira, Hugo and Santin, Gabriele},
  title     = {HJB-RBF Based Approach for the Control of PDEs},
  journal   = {Journal of Scientific Computing},
  year      = {2023},
  volume    = {96},
  number    = {1},
  month     = {May},
  pages     = {25},
  issn      = {1573-7691},
  doi       = {10.1007/s10915-023-02208-3},
  url       = {https://doi.org/10.1007/s10915-023-02208-3},
}
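
The Shepard approximation used for the value function admits a very compact implementation: the approximant at a point is a weighted average of the nodal values, with RBF weights normalized to sum to one. The sketch below is a generic illustration with a Gaussian RBF and hypothetical data, not the paper's mesh-generation or shape-parameter selection procedure.

import numpy as np

def shepard_eval(x_eval, nodes, values, eps=2.0):
    # Shepard's moving least squares approximant with a Gaussian RBF weight.
    # x_eval: (m, d) evaluation points, nodes: (n, d) scattered grid, values: (n,) nodal values.
    d2 = np.sum((x_eval[:, None, :] - nodes[None, :, :]) ** 2, axis=-1)
    w = np.exp(-(eps ** 2) * d2)             # RBF weights
    w /= np.sum(w, axis=1, keepdims=True)    # normalize: each row sums to 1
    return w @ values                        # convex combination of nodal values

# toy usage on a scattered 2d grid
rng = np.random.default_rng(0)
nodes = rng.uniform(-1, 1, size=(200, 2))
values = np.sum(nodes ** 2, axis=1)          # samples of a known function
x_eval = np.array([[0.1, -0.3], [0.5, 0.5]])
print(shepard_eval(x_eval, nodes, values))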
  

2022

G. Elefante, W. Erb, F. Marchetti, E. Perracchione, D. Poggiali, G. Santin, Interpolation with the polynomial kernels, Dolomites Res. Notes Approx. (2022). Preprint Published
The polynomial kernels are widely used in machine learning, and they are one of the default choices to develop kernel-based classification and regression models. However, they are rarely used and considered in numerical analysis due to their lack of strict positive definiteness. In particular, they do not enjoy the usual property of unisolvency for arbitrary point sets, which is one of the key properties used to build kernel-based interpolation methods. This paper is devoted to establishing some initial results for the study of these kernels, and their related interpolation algorithms, in the context of approximation theory. We first prove necessary and sufficient conditions on point sets which guarantee the existence and uniqueness of an interpolant. We then study the Reproducing Kernel Hilbert Spaces (or native spaces) of these kernels and their norms, and provide inclusion relations between spaces corresponding to different kernel parameters. With these spaces at hand, it is further possible to derive generic error estimates which apply to sufficiently smooth functions, thus escaping the native space. Finally, we show how to apply an efficient and stable algorithm to these kernels to obtain accurate interpolants, and we test them in some numerical experiments. After this analysis several computational and theoretical aspects remain open, and we outline possible further research directions in a concluding section. This work builds some bridges between kernel and polynomial interpolation, two topics to which the authors, to different extents, have been introduced under the supervision or through the work of Stefano De Marchi. For this reason, they wish to dedicate this work to him on the occasion of his 60th birthday.
  
@Article{Elefante2022,
  Title                    = {Interpolation with the polynomial kernels},
  Author                   = {Elefante, Giacomo and Erb, Wolfgang and Marchetti, Francesco and Perracchione, Emma and Poggiali, Davide and Santin, Gabriele},
  Journal                  = {Dolomites Res. Notes Approx.},
  Year                     = {2022},
  Pages                    = {45--60},
  Volume                   = {15},
  Fjournal                 = {Dolomites Research Notes on Approximation},
  Sjournal                 = {Dolomites Res.\ Notes Approx.},
  Url                      = {https://drna.padovauniversitypress.it/2022/4/5},
  Doi                      = {http://dx.doi.org/10.14658/pupj-drna-2022-4-5}
}
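
As a minimal illustration of the interpolation setting studied in the paper, the sketch below (hypothetical data and parameters) interpolates scattered values with the polynomial kernel $k(x,y) = (1 + \langle x, y\rangle)^a$ and verifies the interpolation conditions; as the paper discusses, the kernel matrix is not invertible for arbitrary point sets, so the point set must be unisolvent.

import numpy as np

def poly_kernel(X, Y, a=3, c=1.0):
    # polynomial kernel k(x, y) = (c + <x, y>)^a
    return (c + X @ Y.T) ** a

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(8, 2))            # 8 scattered points in 2d
f = np.sin(X[:, 0]) * X[:, 1]

K = poly_kernel(X, X)
print("condition number:", np.linalg.cond(K))  # finite: this point set is unisolvent
alpha = np.linalg.solve(K, f)                  # interpolation coefficients

def interpolant(X_new):
    return poly_kernel(X_new, X) @ alpha

print(np.max(np.abs(interpolant(X) - f)))      # ~0: interpolation conditions hold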
  

F. Marchetti, G. Santin, Convergence results in image interpolation with the continuous SSIM, SIAM Journal on Imaging Sciences (2022). Preprint Published
Assessing the similarity of two images is a complex task that attracts significant efforts in the image processing community. The widely used Structural Similarity Index Measure (SSIM) addresses this problem by quantifying a perceptual structural similarity. In this paper we consider a recently introduced continuous SSIM (cSSIM), which allows one to analyse sequences of images of increasingly fine resolutions, and further extend the definition of the index to encompass the locally weighted version that is used in practice. For both the local and the global versions, we prove that the continuous index includes the classical SSIM as a special case, and we provide a precise connection between image similarity measured by the cSSIM and by the $L_2$ norm. Using this connection, we derive bounds on the cSSIM by means of bounds on the $L_2$ error, and we even prove that the two error measures are equivalent in certain circumstances. We exploit these results to obtain precise rates of convergence with respect to the cSSIM for several concrete image interpolation methods, and we further validate these findings by different numerical experiments. This newly established connection paves the way to obtain novel insights into the features and limitations of the SSIM, including on the effect of the local weighted window on the index performances.
  
@Article{Marchetti2022,
  author    = {Marchetti, Francesco and Santin, Gabriele},
  title     = {Convergence Results in Image Interpolation With the Continuous SSIM},
  journal   = {SIAM Journal on Imaging Sciences},
  year      = {2022},
  volume    = {15},
  number    = {4},
  pages     = {1977-1999},
  doi       = {10.1137/22M147637X},
  url       = {https://doi.org/10.1137/22M147637X},
}
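
For reference, the classical SSIM between two image patches $x$ and $y$, which the cSSIM recovers as a special case, reads in standard notation
$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)},$$
where $\mu_x, \mu_y$ are local means, $\sigma_x^2, \sigma_y^2$ local variances, $\sigma_{xy}$ the local covariance, and $c_1, c_2$ small stabilizing constants.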
  

S. Cuomo, W. Erb, G. Santin, Kernel-Based Models for Influence Maximization on Graphs based on Gaussian Process Variance Minimization, Journal of Computational and Applied Mathematics (2022). Preprint Published
The inference of novel knowledge, the discovery of hidden patterns, and the uncovering of insights from large amounts of data from a multitude of sources make Data Science (DS) an art rather than just a mere scientific discipline. The study and design of mathematical models able to analyze information represents a central research topic in DS. In this work, we introduce and investigate a novel model for influence maximization (IM) on graphs using ideas from kernel-based approximation, Gaussian process regression, and the minimization of a corresponding variance term. Data-driven approaches can be applied to determine proper kernels for this IM model, and machine learning methodologies are adopted to tune the model parameters. Compared to stochastic models in this field that rely on costly Monte Carlo simulations, our model allows for a simple and cost-efficient update strategy to compute optimal influencing nodes on a graph. In several numerical experiments, we show the properties and benefits of this new model.
  
@Article{Cuomo2022,
  author    = {Salvatore Cuomo and Wolfgang Erb and Gabriele Santin},
  title     = {Kernel-based models for influence maximization on graphs based on Gaussian process variance minimization},
  journal   = {Journal of Computational and Applied Mathematics},
  year      = {2022},
  pages     = {114951},
  issn      = {0377-0427},
  doi       = {https://doi.org/10.1016/j.cam.2022.114951},
  url       = {https://www.sciencedirect.com/science/article/pii/S0377042722005490},
}
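
The core update, greedily selecting nodes so as to reduce a Gaussian process variance term, can be sketched in a few lines. The following is a generic illustration with an arbitrary positive-definite kernel matrix on the nodes, not the paper's exact kernel or influence model: at each step the node with the largest posterior variance is added, and the variances of the remaining nodes are downdated.

import numpy as np

def greedy_variance_minimization(K, n_select):
    # K: (n, n) positive-definite kernel matrix on the graph nodes.
    # Returns the indices of the greedily selected nodes.
    n = K.shape[0]
    selected = []
    var = np.diag(K).astype(float).copy()      # prior variances k(v, v)
    basis = np.zeros((n_select, n))            # Newton-type basis functions on the nodes
    for j in range(n_select):
        i = int(np.argmax(var))                # node with the largest remaining variance
        selected.append(i)
        new = K[:, i] - basis[:j].T @ basis[:j, i]
        new = new / np.sqrt(var[i])            # next basis function (Cholesky-type update)
        basis[j] = new
        var = np.maximum(var - new ** 2, 0.0)  # posterior variance after adding node i
    return selected

# toy usage with a hypothetical kernel matrix on 6 nodes
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
K = A @ A.T + 6 * np.eye(6)
print(greedy_variance_minimization(K, 3))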
  

T. Wenzel, G. Santin, B. Haasdonk, Analysis of target data-dependent greedy kernel algorithms: Convergence rates for f-, f P- and f/P-greedy, Constructive Approximation (2022). Preprint Published
Data-dependent greedy algorithms in kernel spaces are known to provide fast converging interpolants, while being extremely easy to implement and efficient to run. Despite this experimental evidence, no detailed theory has yet been presented. This situation is unsatisfactory especially when compared to the case of the data-independent P-greedy algorithm, for which optimal convergence rates are available, despite its performances being usually inferior to the ones of target data-dependent algorithms. In this work we fill this gap by first defining a new scale of greedy algorithms for interpolation that comprises all the existing ones in a unique analysis, where the degree of dependency of the selection criterion on the functional data is quantified by a real parameter. We then prove new convergence rates where this degree is taken into account and we show that, possibly up to a logarithmic factor, target data-dependent selection strategies provide faster convergence. In particular, for the first time we obtain convergence rates for target data adaptive interpolation that are faster than the ones given by uniform points, without the need of any special assumption on the target function. The rates are confirmed by a number of examples.
  
@Article{Wenzel2022c,
  author    = {Wenzel, Tizian and Santin, Gabriele and Haasdonk, Bernard},
  title     = {Analysis of Target Data-Dependent Greedy Kernel Algorithms: Convergence Rates for $f$-, $f \cdot P$- and $f/P$-Greedy},
  journal   = {Constructive Approximation},
  year      = {2022},
  month     = {Oct},
  issn      = {1432-0940},
  doi       = {10.1007/s00365-022-09592-3},
  url       = {https://doi.org/10.1007/s00365-022-09592-3},
  day       = {18},
}
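
The scale of selection rules analyzed in the paper ranges from the target-data-independent P-greedy rule (driven by the power function) to fully adaptive rules such as f-greedy (driven by the residual). A naive, non-optimized sketch of the two extreme rules for plain kernel interpolation, with a hypothetical Gaussian kernel and test function:

import numpy as np

def gauss_kernel(X, Y, eps=1.0):
    return np.exp(-(eps ** 2) * (X[:, None] - Y[None, :]) ** 2)

def greedy_interpolation(X, f, n_select, rule="f"):
    # rule = "P": maximize the power function (target-data independent).
    # rule = "f": maximize the residual |f - s_n| (target-data dependent).
    # Naive version: the interpolant is recomputed from scratch at every step.
    selected = []
    for _ in range(n_select):
        if selected:
            K = gauss_kernel(X[selected], X[selected])
            k_eval = gauss_kernel(X, X[selected])
            residual = f - k_eval @ np.linalg.solve(K, f[selected])
            power2 = 1.0 - np.einsum("ij,ij->i", k_eval @ np.linalg.inv(K), k_eval)
        else:
            residual, power2 = f.copy(), np.ones_like(f)
        score = np.abs(residual) if rule == "f" else np.sqrt(np.maximum(power2, 0.0))
        score[selected] = -np.inf              # never pick the same point twice
        selected.append(int(np.argmax(score)))
    return selected

X = np.linspace(-1, 1, 400)                    # candidate points
f = np.tanh(8 * X)                             # test function with a steep gradient
print("f-greedy points:", np.sort(X[greedy_interpolation(X, f, 6, rule="f")]))
print("P-greedy points:", np.sort(X[greedy_interpolation(X, f, 6, rule="P")]))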
  

R. Campagna, S. De Marchi, E. Perracchione, G. Santin, Stable interpolation with exponential-polynomial splines and node selection via greedy algorithms , Advances in Computational Mathematics (2022). Preprint Published
In this work we extend some ideas about greedy algorithms, which are well-established tools for, e.g., kernel bases, to exponential-polynomial splines, whose main drawback consists in possible overfitting and consequent oscillations of the approximant. To partially overcome this issue, we develop some results on theoretically optimal interpolation points. Moreover, we introduce two algorithms which perform an adaptive selection of the spline interpolation points based on the minimization either of the sample residuals ($f$-greedy) or of an upper bound for the approximation error based on the spline Lebesgue function ($\lambda$-greedy). Both methods allow us to obtain an adaptive selection of the sampling points, i.e., the spline nodes. While the $f$-greedy selection is tailored to one specific target function, the $\lambda$-greedy algorithm enables us to define target-data-independent interpolation nodes.
  
@Article{Campagna2022,
  author    = {Campagna, R. and De Marchi, S. and Perracchione, E. and Santin, G.},
  title     = {Stable interpolation with exponential-polynomial splines and node selection via greedy algorithms},
  journal   = {Advances in Computational Mathematics},
  year      = {2022},
  volume    = {48},
  number    = {6},
  month     = {Oct},
  pages     = {69},
  issn      = {1572-9044},
  doi       = {10.1007/s10444-022-09986-8},
  url       = {https://doi.org/10.1007/s10444-022-09986-8},
  day       = {27},
}
  

G. Santin, Equally spaced points are optimal for Brownian Bridge kernel interpolation, Applied Mathematics Letters (2022). Preprint Published
In this paper we show how ideas from spline theory can be used to construct a local basis for the space of translates of a general iterated Brownian Bridge kernel $k_{\beta,\varepsilon}$ for $\beta\in\mathbb{N}$, $\varepsilon\geq 0$. In the simple case $\beta=1$, we derive an explicit formula for the corresponding Lagrange basis, which allows us to solve interpolation problems without inverting any linear system. We use this basis to prove that interpolation with $k_{1,\varepsilon}$ is uniformly stable, i.e., the Lebesgue constant is bounded independently of the number and location of the interpolation points, and that equally spaced points are the unique minimizers of the associated power function, and are thus error optimal. In this derivation, we investigate the role of the shape parameter $\varepsilon>0$, and discuss its effect on these error and stability bounds. Some of the ideas discussed in this paper could be extended to more general Green kernels.
  
@Article{Santin2022b,
  author    = {Gabriele Santin},
  title     = {Equally spaced points are optimal for Brownian Bridge kernel interpolation},
  journal   = {Applied Mathematics Letters},
  year      = {2022},
  pages     = {108489},
  issn      = {0893-9659},
  doi       = {https://doi.org/10.1016/j.aml.2022.108489},
}
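
A minimal sketch of the setting, restricted to the simplest case $\beta = 1$, $\varepsilon = 0$, where the kernel reduces to the standard Brownian bridge covariance $k(x,y) = \min(x,y) - xy$ on $[0,1]$ (hypothetical data, for illustration only):

import numpy as np

def brownian_bridge_kernel(x, y):
    # Brownian bridge kernel k(x, y) = min(x, y) - x*y on [0, 1] (case beta=1, eps=0)
    return np.minimum(x[:, None], y[None, :]) - np.outer(x, y)

n = 9
x = np.linspace(0, 1, n + 2)[1:-1]                 # equally spaced interior points
f = np.sin(np.pi * x)                              # samples of a function vanishing at 0 and 1

K = brownian_bridge_kernel(x, x)
alpha = np.linalg.solve(K, f)                      # interpolation coefficients

x_eval = np.linspace(0, 1, 101)
s = brownian_bridge_kernel(x_eval, x) @ alpha      # kernel interpolant, zero at the boundary
print(np.max(np.abs(s - np.sin(np.pi * x_eval))))  # approximation error on a fine grid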
  

G. Santin, I. Skarbovsky, F. Fournier, B. Lepri, A Framework for Verifiable and Auditable Collaborative Anomaly Detection, IEEE Access (2022). Preprint Published
Collaborative and Federated Learning are emerging approaches to manage cooperation between a group of agents for the solution of Machine Learning tasks, with the goal of improving each agent's performance without disclosing any data. In this paper we present a novel algorithmic architecture that tackles this problem in the particular case of Anomaly Detection (or classification of rare events), a setting where typical applications often comprise data with sensitive information, but where the scarcity of anomalous examples encourages collaboration. We show how Random Forests can be used as a tool for the development of accurate classifiers with an effective insight-sharing mechanism that does not break data integrity. Moreover, we explain how the new architecture can be readily integrated in a blockchain infrastructure to ensure the verifiable and auditable execution of the algorithm. Furthermore, we discuss how this work may set the basis for a more general approach to the design of collaborative ensemble-learning methods beyond the specific task and architecture discussed in this paper.
  
@Article{Santin2022,
  author    = {Santin, Gabriele and Skarbovsky, Inna and Fournier, Fabiana and Lepri, Bruno},
  title     = {A Framework for Verifiable and Auditable Collaborative Anomaly Detection},
  journal   = {IEEE Access},
  year      = {2022},
  pages     = {1-1},
  doi       = {10.1109/ACCESS.2022.3196391},
}
  

E. Leoni, G. Cencetti, G. Santin, T. Istomin, D. Molteni, G. P. Picco, E. Farella, B. Lepri, A. L. Murphy, Measuring close proximity interactions in summer camps during the COVID-19 pandemic, EPJ Data Science (2022). Preprint Published
Policy makers have implemented multiple non-pharmaceutical strategies to mitigate the COVID-19 worldwide crisis. Interventions had the aim of reducing close proximity interactions, which drive the spread of the disease. A deeper knowledge of human physical interactions has proven necessary, especially in all settings involving children, whose education and gathering activities should be preserved. Despite their relevance, almost no data are available on close proximity contacts among children in schools or other educational settings during the pandemic. Contact data are usually gathered via Bluetooth, which nonetheless offers a low temporal and spatial resolution. Recently, ultra-wideband (UWB) radios emerged as a more accurate alternative that nonetheless exhibits a significantly higher energy consumption, limiting in-field studies. In this paper, we leverage a novel approach, embodied by the Janus system, which combines these radios by exploiting their complementary benefits. The very accurate proximity data gathered in-field by Janus, once augmented with several metadata, unlocks unprecedented levels of information, enabling the development of novel multi-level risk analyses. By means of this technology, we have collected real contact data of children and educators in three summer camps during summer 2020 in the province of Trento, Italy. The wide variety of daily activities performed induced multiple individual behaviors, allowing a rich investigation of social environments from the contagion risk perspective. We consider risk based on duration and proximity of contacts and classify interactions according to different risk levels. We can then evaluate the summer camps’ organization, observe the effect of partition in small groups, or social bubbles, and identify the organized activities that mitigate the riskier behaviors. Overall, we offer an insight into the educator-child and child-child social interactions during the pandemic, thus providing a valuable tool for schools, summer camps, and policy makers to (re)structure educational activities safely.
  
@Article{Leoni2022,
  author    = {Leoni, Elia and Cencetti, Giulia and Santin, Gabriele and Istomin, Timofei and Molteni, Davide and Picco, Gian Pietro and Farella, Elisabetta and Lepri, Bruno and Murphy, Amy L.},
  title     = {Measuring close proximity interactions in summer camps during the COVID-19 pandemic},
  journal   = {EPJ Data Science},
  year      = {2022},
  volume    = {11},
  number    = {1},
  month     = {Jan},
  pages     = {5},
  issn      = {2193-1127},
  doi       = {10.1140/epjds/s13688-022-00316-y},
  url       = {https://doi.org/10.1140/epjds/s13688-022-00316-y},
}
  

B. Nobile, G. Santin, B. Lepri, P. Brutti, Reprogramming FairGANs with Variational Auto-Encoders: A New Transfer Learning Model, Book of short papers, SIS2022 - 51st Scientific Meeting of the Italian Statistical Society (2022). Preprint Published
Fairness-aware GANs (FairGANs) exploit the mechanisms of Generative Adversarial Networks (GANs) to impose fairness on the generated data, freeing them from both disparate impact and disparate treatment. Given the model's advantages and performance, we introduce a novel learning framework to transfer a pre-trained FairGAN to other tasks. This reprogramming process has the goal of maintaining the FairGAN's main targets of data utility, classification utility, and data fairness, while widening its applicability and ease of use. In this paper we present the technical extensions required to adapt the original architecture to this new framework (and in particular the use of Variational Auto-Encoders), and discuss the benefits, trade-offs, and limitations of the new model.
  
@InProceedings{Nobile2022,
  author    = {Beatrice Nobile and Gabriele Santin and Bruno Lepri and Pierpaolo Brutti},
  title     = {Reprogramming FairGANs with Variational Auto-Encoders: A New Transfer Learning Model},
  booktitle = {Book of short papers. SIS 2022},
  year      = {2022},
  editor    = {Antonio Balzanella and Matilde Bini and Carlo Cavicchia and Rosanna Verde},
  isbn      = {9788891932310},
}
  

T. Wenzel, M. Kurz, A. Beck, G. Santin, B. Haasdonk, Structured Deep Kernel Networks for Data-Driven Closure Terms of Turbulent Flows, Proceedings of the 13th International Conference on Large Scale Scientific Computations (2022). Preprint Published
Standard kernel methods for machine learning usually struggle when dealing with large datasets. We review a recently introduced Structured Deep Kernel Network (SDKN) approach that is capable of dealing with high-dimensional and huge datasets, while enjoying typical standard machine learning approximation properties. We extend the SDKN to combine it with standard machine learning modules and compare it with Neural Networks on the scientific challenge of data-driven prediction of closure terms of turbulent flows. We show experimentally that the SDKNs are capable of dealing with large datasets and achieve near-perfect accuracy on the given application.
  
@InProceedings{Wenzel2022a,
  author    = {Wenzel, Tizian and Kurz, Marius and Beck, Andrea and Santin, Gabriele and Haasdonk, Bernard},
  title     = {Structured Deep Kernel Networks for Data-Driven Closure Terms of Turbulent Flows},
  booktitle = {Large-Scale Scientific Computing},
  year      = {2022},
  editor    = {Lirkov, Ivan and Margenov, Svetozar},
  publisher = {Springer International Publishing},
  isbn      = {978-3-030-97549-4},
  pages     = {410--418},
  address   = {Cham},
}
  

2021

G. Santin, B. Haasdonk, Kernel Methods for Surrogate Modeling, Model Order Reduction, Volume 1: System- and Data-Driven Methods and Algorithms, P. Benner, W. Schilders, S. Grivet-Talocia, A. Quarteroni, G. Rozza, L. M. Silveira Eds. (2021). Preprint Published
This chapter deals with kernel methods as a special class of techniques for surrogate modeling. Kernel methods have proven to be efficient in machine learning, pattern recognition and signal analysis due to their flexibility, excellent experimental performance and elegant functional analytic background. These data-based techniques provide so-called kernel expansions, i.e., linear combinations of kernel functions which are generated from given input-output point samples that may be arbitrarily scattered. In particular, these techniques are meshless, do not require or depend on a grid, and hence are less prone to the curse of dimensionality, even for high-dimensional problems. In contrast to projection-based model reduction, we do not necessarily assume a high-dimensional model, but a general function that models input-output behavior within some simulation context. This could be a micro-model in a multiscale simulation, a submodel in a coupled system, an initialization function for solvers, a coefficient function in PDEs, etc. First, kernel surrogates can be useful if the input-output function is expensive to evaluate, e.g. is a result of a finite element simulation. Here, acceleration can be obtained by sparse kernel expansions. Second, if a function is available only via measurements or a few function evaluation samples, kernel approximation techniques can provide function surrogates that allow global evaluation. We present some important kernel approximation techniques, namely kernel interpolation, greedy kernel approximation and support vector regression. Pseudo-code is provided for ease of reproducibility. In order to illustrate the main features, commonalities and differences, we compare these techniques on a real-world application. The experiments clearly indicate the enormous acceleration potential.
  
@inbook{Santin2021b,
      author    = {Gabriele Santin and Bernard Haasdonk},
      title     = {Kernel methods for surrogate modeling},
      booktitle = {Model Order Reduction. Volume 1: System- and Data-Driven Methods and Algorithms},
      year      = {2021},
      editor    = {Peter Benner and Stefano Grivet-Talocia and Alfio Quarteroni and Gianluigi Rozza and Wil Schilders and Luís Miguel Silveira},
      publisher = {De Gruyter},
      pages     = {311--354},
      doi       = {10.1515/9783110498967-009},
}
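
Among the techniques covered in the chapter, support vector regression is the quickest to try off the shelf. The snippet below is a generic illustration of building a kernel surrogate of an expensive input-output map with scikit-learn's SVR; the data, model and hyperparameters are hypothetical and unrelated to the chapter's benchmark or pseudo-code.

import numpy as np
from sklearn.svm import SVR

def expensive_model(X):
    # stand-in for a costly simulation evaluated at scattered inputs
    return np.sin(3 * X[:, 0]) * np.exp(-X[:, 1] ** 2)

rng = np.random.default_rng(3)
X_train = rng.uniform(-1, 1, size=(200, 2))
y_train = expensive_model(X_train)

# Gaussian-kernel support vector regression as a sparse kernel surrogate
surrogate = SVR(kernel="rbf", C=10.0, epsilon=1e-3, gamma=2.0)
surrogate.fit(X_train, y_train)

X_test = rng.uniform(-1, 1, size=(500, 2))
err = np.max(np.abs(surrogate.predict(X_test) - expensive_model(X_test)))
print("max surrogate error:", err, "| support vectors:", len(surrogate.support_))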
  

B. Haasdonk, B. Hamzi, G. Santin, D. Wittwar, Kernel methods for center manifold approximation and a weak data-based version of the Center Manifold Theorem, Physica D: Nonlinear Phenomena (2021). Preprint Published
For dynamical systems with a non-hyperbolic equilibrium, it is possible to significantly simplify the study of stability by means of the center manifold theory. This theory makes it possible to isolate the complicated asymptotic behavior of the system close to the equilibrium point and to obtain meaningful predictions of its behavior by analyzing a reduced order system on the so-called center manifold. Since the center manifold is usually not known, good approximation methods are important, as the center manifold theorem states that the stability properties of the origin of the reduced order system are the same as those of the origin of the full order system. In this work, we establish a data-based version of the center manifold theorem that works by considering an approximation in place of an exact manifold. The error between the approximated and the original reduced dynamics is also quantified. We then use an apposite data-based kernel method to construct a suitable approximation of the manifold close to the equilibrium, which is compatible with our general error theory. The data are collected by repeated numerical simulation of the full system by means of a high-accuracy solver, which generates sets of discrete trajectories that are then used as a training set. The method is tested on different examples which show promising performance and good accuracy.
  
@article{Haasdonk2021b,
      title     = {Kernel methods for center manifold approximation and a weak data-based version of the Center Manifold Theorem},
      author    = {Bernard Haasdonk and Boumediene Hamzi and Gabriele Santin and Dominik Wittwar},
      journal   = {Physica D: Nonlinear Phenomena},
      volume    = {427},
      pages     = {133007},
      year      = {2021},
      issn      = {0167-2789},
      doi       = {https://doi.org/10.1016/j.physd.2021.133007},
}
  

B. Haasdonk, T. Wenzel, G. Santin, S. Schmitt, Biomechanical surrogate modelling using stabilized vectorial greedy kernel methods, Numerical Mathematics and Advanced Applications ENUMATH 2019, F. J. Vermolen, C. Vuik, Eds (2021). Preprint Published
Greedy kernel approximation algorithms are successful techniques for sparse and accurate data-based modelling and function approximation. Based on a recent idea of stabilization (Wenzel et al., A novel class of stabilized greedy kernel approximation algorithms: convergence, stability & uniform point distribution. e-prints. arXiv:1911.04352, 2019) of such algorithms in the scalar output case, we here consider the vectorial extension built on VKOGA (Wirtz and Haasdonk, Dolomites Res Notes Approx 6:83–100, 2013). We introduce the so-called $\gamma$-restricted VKOGA, comment on analytical properties and present numerical evaluation on data from a clinically relevant application, the modelling of the human spine. The experiments show that the new stabilized algorithms result in improved accuracy and stability over the non-stabilized algorithms.
  
@InProceedings{Haasdonk2021a,
  author                   = {Haasdonk, Bernard and Wenzel, Tizian and Santin, Gabriele and Schmitt, Syn},
  title                    = {Biomechanical Surrogate Modelling Using Stabilized Vectorial Greedy Kernel Methods},
  booktitle                = {Numerical Mathematics and Advanced Applications ENUMATH 2019},
  year                     = {2021},
  editor                   = {Vermolen, Fred J. and Vuik, Cornelis},
  publisher                = {Springer International Publishing},
  isbn                     = {978-3-030-55874-1},
  pages                    = {499--508},
  doi      		   = {https://doi.org/10.1007/978-3-030-55874-1_49},
  address                  = {Cham},
}
  

G. Santin, T. Karvonen, B. Haasdonk, Sampling based approximation of linear functionals in Reproducing Kernel Hilbert Spaces, BIT (2021). Preprint Published
In this paper we analyze a greedy procedure to approximate a linear functional defined in a reproducing kernel Hilbert space by nodal values. This procedure computes a quadrature rule which can be applied to general functionals. For a large class of functionals, which includes integration functionals and other interesting cases but does not include differentiation, we prove convergence results for the approximation by means of quasi-uniform and greedy points, which generalize in various ways several known results. A perturbation analysis of the weights and node computation is also discussed. Beyond the theoretical investigations, we demonstrate numerically that our algorithm is effective in treating various integration densities, and that it is even very competitive when compared to existing methods for Uncertainty Quantification.
  
@Article{Santin2021,
  author    = {Santin, Gabriele and Karvonen, Toni and Haasdonk, Bernard},
  title     = {Sampling based approximation of linear functionals in reproducing kernel Hilbert spaces},
  journal   = {BIT Numerical Mathematics},
  year      = {2021},
  month     = {Apr},
  issn      = {1572-9125},
  doi       = {10.1007/s10543-021-00870-3},
  url       = {https://doi.org/10.1007/s10543-021-00870-3},
  day       = {13},
}
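
For a fixed set of nodes, the weights of the quadrature rule $L(f) \approx \sum_i w_i f(x_i)$ that is optimal in the RKHS solve a linear system whose right-hand side is the functional applied to the kernel translates. A sketch for the integration functional on $[0,1]$ with the Brownian-motion kernel $k(x,y) = \min(x,y)$, whose kernel integrals are available in closed form (an illustrative example, not the paper's greedy node selection):

import numpy as np

x = np.linspace(0.1, 1.0, 10)                 # quadrature nodes in (0, 1]
K = np.minimum(x[:, None], x[None, :])        # kernel matrix of k(x, y) = min(x, y)

# right-hand side: L applied to the kernel translates, here L(f) = int_0^1 f(t) dt,
# so b_i = int_0^1 min(t, x_i) dt = x_i - x_i^2 / 2
b = x - x ** 2 / 2

w = np.linalg.solve(K, b)                     # RKHS-optimal quadrature weights

f = lambda t: np.sin(np.pi * t / 2)           # test function in the native space (f(0) = 0)
print(w @ f(x), 2 / np.pi)                    # quadrature value vs. exact integral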
  

G. Cencetti, G. Santin, A. Longa, E. Pigani, A. Barrat, C. Cattuto, S. Lehmann, M. Salathé, B. Lepri, Digital proximity tracing on empirical contact networks for pandemic control, Nature Commun. (2021). Preprint Published
Digital contact tracing is a relevant tool to control infectious disease outbreaks, including the COVID-19 epidemic. Early work evaluating digital contact tracing omitted important features and heterogeneities of real-world contact patterns influencing contagion dynamics. We fill this gap with a modeling framework informed by empirical high-resolution contact data to analyze the impact of digital contact tracing in the COVID-19 pandemic. We investigate how well contact tracing apps, coupled with the quarantine of identified contacts, can mitigate the spread in real environments. We find that restrictive policies are more effective in containing the epidemic but come at the cost of unnecessary large-scale quarantines. Policy evaluation through their efficiency and cost results in optimized solutions which only consider contacts longer than 15–20 minutes and closer than 2–3 meters to be at risk. Our results show that isolation and tracing can help control re-emerging outbreaks when some conditions are met: (i) a reduction of the reproductive number through masks and physical distance; (ii) a low-delay isolation of infected individuals; and (iii) high compliance. Finally, we observe the inefficacy of a less privacy-preserving tracing strategy involving second-order contacts. Our results may inform digital contact tracing efforts currently being implemented across several countries worldwide.
  
@Article{Cencetti2021,
  author		   = {Cencetti, G. and Santin, G. and Longa, A. and Pigani, E. and Barrat, A. and Cattuto, C. and Lehmann, S. and Salath{\'e}, M. and Lepri, B.},
  title  		   = {Digital proximity tracing on empirical contact networks for pandemic control},
  journal  		   = {Nature Communications},
  year     		   = {2021},
  volume   		   = {12},
  number   		   = {1},
  month     		   = {Mar},
  pages    		   = {1655},
  issn     		   = {2041-1723},
  doi      		   = {10.1038/s41467-021-21809-w},
  url      		   = {https://doi.org/10.1038/s41467-021-21809-w},
  day      		   = {12},
}
  

T. Wenzel, G. Santin, B. Haasdonk, A novel class of stabilized greedy kernel approximation algorithms: Convergence, stability and uniform point distribution, J. of Approx. Theory (2021). Preprint Published
Kernel-based methods provide a way to reconstruct potentially high-dimensional functions from meshfree samples, i.e., sampling points and corresponding target values. A crucial ingredient for this to be successful is the distribution of the sampling points. Since the computation of an optimal selection of sampling points may be an infeasible task, one promising option is to use greedy methods. Although these methods may be very effective, depending on the specific greedy criterion the chosen points might quickly lead to instabilities in the computation. To circumvent this problem, we introduce and investigate a new class of stabilized greedy kernel algorithms, which can be used to create a scale of new selection strategies. We analyze these algorithms, and in particular we prove convergence results and quantify in a precise way the distribution of the selected points. These results allow us to prove, in the case of certain Sobolev kernels, that the algorithms have optimal stability and optimal convergence rates, including for functions outside the native space of the kernel. The results also apply to the case of the usual $P$-greedy algorithm, significantly improving state-of-the-art results available in the literature. Illustrative experiments are presented that support the theoretical findings and show improvements of the stabilized algorithms in terms of accuracy due to improved stability.
  
@Article{Wenzel2021a,
  author    = {Wenzel, Tizian and Santin, Gabriele and Haasdonk, Bernard},
  title     = {A novel class of stabilized greedy kernel approximation algorithms: Convergence, stability and uniform point distribution},
  journal   = {Journal of Approximation Theory},
  year      = {2021},
}
  

2020

B. Haasdonk, B. Hamzi, G. Santin, D. Wittwar, Greedy kernel methods for center manifold approximation, Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2018, S. Sherwin, D. Moxey, J. Peiró, P. E. Vincent, and C. Schwab Eds. (2020). Preprint Published
For certain dynamical systems it is possible to significantly simplify the study of stability by means of the center manifold theory. This theory allows one to isolate the complicated asymptotic behavior of the system close to a non-hyperbolic equilibrium point, and to obtain meaningful predictions of its behavior by analyzing a reduced dimensional problem. Since the manifold is usually not known, approximation methods are of great interest to obtain qualitative estimates. In this work, we use a data-based greedy kernel method to construct a suitable approximation of the manifold close to the equilibrium. The data are collected by repeated numerical simulation of the full system by means of a high-accuracy solver, which generates sets of discrete trajectories that are then used to construct a surrogate model of the manifold. The method is tested on different examples which show promising performance and good accuracy.
  
@InProceedings{Haasdonk2020,
  author                   = {Haasdonk, Bernard and Hamzi, Boumediene and Santin, Gabriele and Wittwar, Dominik},
  title                    = {Greedy Kernel Methods for Center Manifold Approximation},
  booktitle                = {Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2018},
  year                     = {2020},
  editor                   = {Sherwin, Spencer J. and Moxey, David and Peir{\'o}, Joaquim and Vincent, Peter E. and Schwab, Christoph},
  publisher                = {Springer International Publishing},
  isbn                     = {978-3-030-39647-3},
  pages                    = {95--106},
  doi                      = {10.1007/978-3-030-39647-3_6},
  url                      = {http://doi.org/10.1007/978-3-030-39647-3_6},
  address                  = {Cham},

}
  

2019

T. Brünnette, G. Santin, and B. Haasdonk, Greedy kernel methods for accelerating implicit integrators for parametric ODEs, Numerical Mathematics and Advanced Applications ENUMATH 2017, F.A. Radu, K. Kumar, I. Berre, J.M. Nordbotten, I.S. Pop, Eds. (2019). Preprint Published
We present a novel acceleration method for the solution of parametric ODEs by single-step implicit solvers by means of greedy kernel-based surrogate models. In an offline phase, a set of trajectories is precomputed with a high-accuracy ODE solver for a selected set of parameter samples, and used to train a kernel model which predicts the next point in the trajectory as a function of the last one. This model is cheap to evaluate, and it is used in an online phase for new parameter samples to provide a good initialization point for the nonlinear solver of the implicit integrator. The accuracy of the surrogate translates into a reduction of the number of iterations until convergence of the solver, thus providing an overall speedup of the full simulation. Interestingly, in addition to providing an acceleration, the accuracy of the solution is maintained, since the ODE solver is still used to guarantee the required precision. Although the method can be applied to a large variety of solvers and different ODEs, we present in detail its use with the implicit Euler method for the solution of the Burgers equation, which proves to be a meaningful test case to demonstrate the method’s features.
  
@InProceedings{Bruennette2019,
  Title                    = {Greedy Kernel Methods for Accelerating Implicit Integrators for Parametric {ODE}s},
  Author                   = {Br{\"u}nnette, Tim and Santin, Gabriele and Haasdonk, Bernard},
  Booktitle                = {Numerical Mathematics and Advanced Applications - ENUMATH 2017},
  Year                     = {2019},

  Address                  = {Cham},
  Editor                   = {Radu, Florin Adrian and Kumar, Kundan and Berre, Inga and Nordbotten, Jan Martin and Pop, Iuliu Sorin},
  Pages                    = {889--896},
  Publisher                = {Springer International Publishing},
  ISBN                     = {978-3-319-96415-7},
}
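As a rough illustration of the online phase described above, the following Python sketch warm-starts the Newton iteration of one implicit Euler step from an externally provided guess (for instance the prediction of a kernel surrogate trained offline); the function names, the tolerance, and the surrogate placeholder are assumptions, not the paper's implementation.

import numpy as np

def implicit_euler_step(f, jac_f, y, dt, y_guess, tol=1e-10, max_iter=50):
    """One implicit Euler step y_new = y + dt * f(y_new), solved with Newton's
    method started from y_guess; a better guess means fewer Newton iterations."""
    y_new = np.array(y_guess, dtype=float)
    for _ in range(max_iter):
        residual = y_new - y - dt * f(y_new)
        if np.linalg.norm(residual) < tol:
            break
        J = np.eye(len(y)) - dt * jac_f(y_new)   # Jacobian of the residual
        y_new -= np.linalg.solve(J, residual)
    return y_new

# In the offline/online setting, y_guess would come from the greedy kernel model
# (e.g. a hypothetical surrogate.predict(y)); the baseline it competes with is
# the explicit-Euler guess y + dt * f(y).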
  

M. Köppel, F. Franzelin, I. Kröker, G. Santin, D. Wittwar, S. Oladyshkin, A. Barth, B. Haasdonk, W. Nowak, D. Pflüger, C. Rohde, Comparison of data-driven uncertainty quantification methods for a carbon dioxide storage benchmark scenario, Comput. Geosci (2019). Preprint Published
A variety of methods is available to quantify uncertainties arising within the modeling of flow and transport in carbon dioxide storage, but there is a lack of thorough comparisons. Usually, raw data from such storage sites can hardly be described by theoretical statistical distributions since only very limited data is available. Hence, exact information on distribution shapes for all uncertain parameters is very rare in realistic applications. We discuss and compare four different methods tested for data-driven uncertainty quantification based on a benchmark scenario of carbon dioxide storage. In the benchmark, for which we provide data and code, carbon dioxide is injected into a saline aquifer modeled by the nonlinear capillarity-free fractional flow formulation for two incompressible fluid phases, namely carbon dioxide and brine. To cover different aspects of uncertainty quantification, we incorporate various sources of uncertainty such as uncertainty of boundary conditions, of parameters in constitutive relations, and of material properties. We consider recent versions of the following non-intrusive and intrusive uncertainty quantification methods: arbitrary polynomial chaos, spatially adaptive sparse grids, kernel-based greedy interpolation, and hybrid stochastic Galerkin. The performance of each approach is demonstrated assessing expectation value and standard deviation of the carbon dioxide saturation against a reference statistic based on Monte Carlo sampling. We compare the convergence of all methods reporting on accuracy with respect to the number of model runs and resolution. Finally, we offer suggestions about the methods’ advantages and disadvantages that can guide the modeler for uncertainty quantification in carbon dioxide storage and beyond.
  
@Article{Koeppel2019,
  Title                    = {Comparison of data-driven uncertainty quantification methods for a carbon dioxide storage benchmark scenario},
  Author                   = {K{\"o}ppel, Markus and Franzelin, Fabian and Kr{\"o}ker, Ilja and Oladyshkin, Sergey and Santin, Gabriele and Wittwar, Dominik and Barth, Andrea and Haasdonk, Bernard and Nowak, Wolfgang and Pfl{\"u}ger, Dirk and Rohde, Christian},
  Journal                  = {Computational Geosciences},
  Year                     = {2019},
  Month                    = {Apr},
  Number                   = {2},
  Pages                    = {339--354},
  Volume                   = {23},
  Day                      = {01},
  Doi                      = {10.1007/s10596-018-9785-x},
  Url                      = {https://doi.org/10.1007/s10596-018-9785-x}
}
  

2018

D. Wittwar, G. Santin, B. Haasdonk, Interpolation with uncoupled separable matrix-valued kernels, Dolomites Res. Notes Approx. (2018). Preprint Published
In this paper we consider the problem of approximating vector-valued functions over a domain $\Omega$. For this purpose, we use matrix-valued reproducing kernels, which can be related to reproducing kernel Hilbert spaces of vectorial functions and which can be viewed as an extension of the scalar-valued case. These spaces seem promising when modelling correlations between the target function components, as the components are not learned independently of each other. We focus on the interpolation with such matrix-valued kernels. We derive error bounds for the interpolation error in terms of a generalized power function, and we introduce a subclass of matrix-valued kernels whose power functions can be traced back to the power function of scalar-valued reproducing kernels. Finally, we apply this kind of kernel to some artificial data to illustrate the benefit of interpolation with matrix-valued kernels in comparison to a componentwise approach.
  
@Article{Wittwar2018,
  Title                    = {Interpolation with uncoupled separable matrix-valued kernels},
  Author                   = {Wittwar, Dominik and Santin, Gabriele and Haasdonk, Bernard},
  Journal                  = {Dolomites Res. Notes Approx.},
  Year                     = {2018},
  Pages                    = {23--29},
  Volume                   = {11},
  Doi                      = {10.14658/pupj-drna-2018-3-4},
  Fjournal                 = {Dolomites Research Notes on Approximation},
  Sjournal                 = {Dolomites Res.\ Notes Approx.},
  Url                      = {https://drna.padovauniversitypress.it/2018/3/4}
}
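To illustrate the separable subclass mentioned in the abstract, here is a small Python sketch of interpolation with a kernel of the form $K(x,y) = k(x,y)\,B$ for a symmetric positive definite matrix $B$; the Gaussian kernel, the toy data, and the choice of $B$ are illustrative assumptions only.

import numpy as np

def gauss_kernel(X, Y, eps=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps ** 2 * d2)

def fit_separable(X, F, B, eps=1.0):
    """Interpolate vector-valued data F (n x q) at sites X (n x d) with the
    separable matrix-valued kernel K(x, y) = k(x, y) * B, B symmetric positive
    definite.  The n*q x n*q block system reduces to the matrix equation A C B = F."""
    A = gauss_kernel(X, X, eps)
    return np.linalg.solve(A, np.linalg.solve(B, F.T).T)   # C = A^{-1} F B^{-1}

def eval_separable(Xnew, X, C, B, eps=1.0):
    # s(x) = sum_j k(x, x_j) B c_j, evaluated at all rows of Xnew
    return gauss_kernel(Xnew, X, eps) @ C @ B

# usage on toy data; B = identity would recover the componentwise approach
X = np.random.rand(40, 2)
F = np.column_stack([np.sin(X.sum(1)), np.cos(X.sum(1))])
B = np.array([[2.0, 0.5], [0.5, 1.0]])
C = fit_separable(X, F, B, eps=3.0)
print(np.abs(eval_separable(X, X, C, B, eps=3.0) - F).max())   # small: interpolation holds up to conditioning

Because of the separable structure only the scalar kernel matrix $A$ and the small matrix $B$ need to be factorized, instead of the full block system.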
  

T. Köppl, G. Santin, B. Haasdonk, R. Helmig, Numerical modelling of a peripheral arterial stenosis using dimensionally reduced models and kernel methods, Int. J. Numer. Meth. Biomed. Engng. (2018). Preprint Published
In this work, we consider two kinds of model reduction techniques to simulate blood flow through the largest systemic arteries, where a stenosis is located in a peripheral artery, i.e., in an artery that is located far away from the heart. For our simulations, we place the stenosis in one of the tibial arteries belonging to the right lower leg (right posterior tibial artery). The model reduction techniques that are used are, on the one hand, dimensionally reduced models (1-D and 0-D models, the so-called mixed-dimension model) and, on the other hand, surrogate models produced by kernel methods. Both methods are combined in such a way that the mixed-dimension models yield training data for the surrogate model, where the surrogate model is parametrised by the degree of narrowing of the peripheral stenosis. By means of a well-trained surrogate model, we show that simulation data can be reproduced with satisfactory accuracy and that parameter optimisation or state estimation problems can be solved in a very efficient way. Furthermore, it is demonstrated that a surrogate model enables us to present, after a very short simulation time, the impact of a varying degree of stenosis on blood flow, obtaining a speedup of several orders of magnitude over the full model.
  
@Article{Koeppl2018,
  Title                    = {Numerical modelling of a peripheral arterial stenosis using dimensionally reduced models and kernel methods},
  Author                   = {K{\"o}ppl, Tobias and Santin, Gabriele and Haasdonk, Bernard and Helmig, Rainer},
  Journal                  = {International Journal for Numerical Methods in Biomedical Engineering},
  Year                     = {2018},
  Note                     = {e3095 cnm.3095},
  Number                   = {8},
  Pages                    = {e3095},
  Volume                   = {34},
  Doi                      = {10.1002/cnm.3095},
  Eprint                   = {https://onlinelibrary.wiley.com/doi/pdf/10.1002/cnm.3095},
  Keywords                 = {blood flow simulations, peripheral stenosis, dimensionally reduced models, mixed‐dimension models, kernel methods, surrogate models, real‐time simulations},
  Url                      = {https://onlinelibrary.wiley.com/doi/abs/10.1002/cnm.3095}
}
  

S. De Marchi, A. Iske, G. Santin, Image reconstruction from scattered Radon data by weighted positive definite kernel functions, Calcolo (2018). Preprint Published
  
@Article{DeMarchi2018,
  Title                    = {Image reconstruction from scattered Radon data by weighted positive definite kernel functions},
  Author                   = {De Marchi, S. and Iske, A. and Santin, G.},
  Journal                  = {Calcolo},
  Year                     = {2018},
  Month                    = {Feb},
  Number                   = {1},
  Pages                    = {2},
  Volume                   = {55},
  Day                      = {02},
  Doi                      = {10.1007/s10092-018-0247-6},
  ISSN                     = {1126-5434},
  Url                      = {https://doi.org/10.1007/s10092-018-0247-6}
}


  

G. Santin, B. Haasdonk, Greedy kernel approximation for sparse surrogate modelling, Reduced-Order Modeling (ROM) for Simulation and Optimization: Powerful Algorithms as Key Enablers for Scientific Computing, W. Keiper, A. Milde, and S. Volkwein, Eds. (2018). Preprint Published
Modern simulation scenarios frequently require multi-query or real-time responses of simulation models for statistical analysis, optimization, or process control. However, the underlying simulation models may be very time-consuming, rendering the simulation task difficult or infeasible. This motivates the need for rapidly computable surrogate models. We address the case of surrogate modeling of functions from vectorial input to vectorial output spaces. These appear, for instance, in simulation of coupled models or in the case of approximating general input–output maps. We review some recent methods and theoretical results in the field of greedy kernel approximation schemes. In particular, we recall the vectorial kernel orthogonal greedy algorithm (VKOGA) for approximating vector-valued functions. We collect some recent convergence statements that provide a sound foundation for these algorithms, in particular quasi-optimal convergence rates in the case of kernels inducing Sobolev spaces. We present some initial experiments with non-symmetric greedy kernel approximation schemes. The results indicate better stability and overall more accurate models in situations where the input data locations are not equally distributed.
  
@InCollection{Haasdonk2018,
  Title                    = {Greedy Kernel Approximation for Sparse Surrogate Modeling},
  Author                   = {Haasdonk, Bernard and Santin, Gabriele},
  Booktitle                = {Reduced-Order Modeling (ROM) for Simulation and Optimization: Powerful Algorithms as Key Enablers for Scientific Computing},
  Publisher                = {Springer International Publishing},
  Year                     = {2018},
  Address                  = {Cham},
  Editor                   = {Keiper, Winfried and Milde, Anja and Volkwein, Stefan},
  Pages                    = {21--45},
  Doi                      = {10.1007/978-3-319-75319-5_2},
  ISBN                     = {978-3-319-75319-5},
  Url                      = {https://doi.org/10.1007/978-3-319-75319-5_2}
}
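In generic notation (not taken from the chapter), the vector-valued surrogates produced by such greedy schemes have the form

$$ s(x) = \sum_{j=1}^{n} k(x, x_j)\,\alpha_j, \qquad \alpha_j \in \mathbb{R}^{q}, $$

where the centers $x_1,\dots,x_n$ are selected greedily, for instance by an $f$-greedy rule that adds the candidate where the norm of the current residual $\|f(x) - s(x)\|$ is largest, so that the expansion stays sparse.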
  

2017

S. De Marchi, A. Idda, G. Santin, A rescaled method for RBF approximation, Proceedings of Approximation Theory 15, San Antonio (Texas), Springer Proceedings on Mathematics and Statistics (2017). Preprint Published
In the recent paper [1], a new method to compute stable kernel-based interpolants has been presented. This rescaled interpolation method combines the standard kernel interpolation with a properly defined rescaling operation, which smooths the oscillations of the interpolant. Although promising, this procedure lacks a systematic theoretical investigation. Through our analysis, this novel method can be understood as standard kernel interpolation by means of a properly rescaled kernel. This point of view allows us to consider its error and stability properties.
  
@InBook{DeMarchi2017,
  Title                    = {A Rescaled Method for RBF Approximation},
  Author                   = {De Marchi, Stefano and Idda, Andrea and Santin, Gabriele},
  Editor                   = {Fasshauer, Gregory E. and Schumaker, Larry L.},
  Pages                    = {39--59},
  Publisher                = {Springer International Publishing},
  Year                     = {2017},
  Address                  = {Cham},
  Booktitle                = {Approximation Theory XV: San Antonio 2016},
  Doi                      = {10.1007/978-3-319-59912-0_3},
  ISBN                     = {978-3-319-59912-0},
  Url                      = {https://doi.org/10.1007/978-3-319-59912-0_3}
}
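In generic notation (the symbols below are illustrative, not the paper's), the rescaled interpolant is the ratio of two standard kernel interpolants on the same nodes $x_1,\dots,x_n$,

$$ \hat{s}_f(x) = \frac{s_f(x)}{s_1(x)}, \qquad s_f(x_i) = f(x_i), \quad s_1(x_i) = 1, \quad i=1,\dots,n, $$

so that $\hat{s}_f$ still interpolates the data while the division by the interpolant of the constant function smooths the oscillations; the paper's point of view is that this is equivalent to standard interpolation with a suitably rescaled kernel.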
  

G. Santin, B. Haasdonk, Convergence rate of the data-independent P-greedy algorithm in kernel-based spaces, Dolomites Res. Notes Approx. (2017). Preprint Published
Kernel-based methods provide flexible and accurate algorithms for the reconstruction of functions from meshless samples. A major question in the use of such methods is the influence of the samples’ locations on the behavior of the approximation, and feasible optimal strategies are not known for general problems. Nevertheless, efficient and greedy point-selection strategies are known. This paper gives a proof of the convergence rate of the data-independent P-greedy algorithm, based on the application of the convergence theory for greedy algorithms in reduced basis methods. The resulting rate of convergence is shown to be quasi-optimal in the case of kernels generating Sobolev spaces. As a consequence, this convergence rate proves that, for kernels of Sobolev spaces, the points selected by the algorithm are asymptotically uniformly distributed, as conjectured in the paper where the algorithm has been introduced.
  
@Article{Santin2016b,
  Title                    = {Convergence rate of the data-independent {P}-greedy algorithm in kernel-based approximation},
  Author                   = {G. Santin and B. Haasdonk},
  Journal                  = {Dolomites Res. Notes Approx.},
  Year                     = {2017},
  Pages                    = {68--78},
  Volume                   = {10},
  Fjournal                 = {Dolomites Research Notes on Approximation},
  Sjournal                 = {Dolomites Res.\ Notes Approx.},
  Url                      = {www.emis.de/journals/DRNA/9-2.html}
}
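For reference, in standard notation (with $\mathcal{H}$ the native space, $X_n$ the current point set, and $s_{f,X_n}$ the kernel interpolant of $f$ on $X_n$), the power function and the P-greedy selection rule read

$$ P_{X_n}(x) = \sup_{0 \neq f \in \mathcal{H}} \frac{|f(x) - s_{f,X_n}(x)|}{\|f\|_{\mathcal{H}}}, \qquad x_{n+1} = \operatorname{arg\,max}_{x \in \Omega} P_{X_n}(x), $$

i.e., the next point is placed where the worst-case interpolation error is largest, independently of the target data.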
  

R. Cavoretto, S. De Marchi, A. De Rossi, E. Perracchione, G. Santin, Partition of unity interpolation using stable kernel-based techniques, Appl. Numer. Math. (2017). Preprint Published
In this paper we propose a new stable and accurate approximation technique which is extremely effective for interpolating large scattered data sets. The Partition of Unity (PU) method is performed considering Radial Basis Functions (RBFs) as local approximants and using locally supported weights. In particular, the approach consists in computing, for each PU subdomain, a stable basis. Such a technique, taking advantage of the local scheme, leads to a significant benefit in terms of stability, especially for flat kernels. Furthermore, an optimized searching procedure is applied to build the local stable bases, thus rendering the method more efficient.
  
@Article{Cavoretto2016b,
  Title                    = {Partition of unity interpolation using stable kernel-based techniques},
  Author                   = {Cavoretto, Roberto and De Marchi, Stefano and De Rossi, Alessandra and Perracchione, Emma and Santin, Gabriele},
  Journal                  = {Applied Numerical Mathematics},
  Year                     = {2016},
  Doi                      = {10.1016/j.apnum.2016.07.005},
  Url                      = {http://dx.doi.org/10.1016/j.apnum.2016.07.005}
}
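In generic notation, the partition of unity blend underlying such schemes is

$$ s(x) = \sum_{j=1}^{M} w_j(x)\, s_j(x), \qquad \sum_{j=1}^{M} w_j(x) = 1, \qquad \operatorname{supp}(w_j) \subseteq \Omega_j, $$

where each $s_j$ is a local RBF approximant on the subdomain $\Omega_j$ (here computed in a stable basis) and the compactly supported weights $w_j$ glue the local solutions into a global one.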
  

2016

R. Cavoretto, S. De Marchi, A. De Rossi, E. Perracchione, G. Santin, Approximating basins of attraction for dynamical systems via stable radial bases, AIP Conf. Proc. (2016). Preprint Published
In applied sciences it is often required to model and supervise temporal evolution of populations via dynamical systems. In this paper, we focus on the problem of approximating the basins of attraction of such models for each stable equilibrium point. We propose to reconstruct the basins via an implicit interpolant using stable radial bases, obtaining the surfaces by partitioning the phase space into disjoint regions. An application to a competition model presenting jointly three stable equilibria is considered.
  
@InProceedings{Cavoretto2016a,
  Title                    = {Approximating basins of attraction for dynamical systems via stable radial bases},
  Author                   = {Cavoretto, Roberto and De Marchi, Stefano and De Rossi, Alessandra and Perracchione, Emma and Santin, Gabriele},
  Booktitle                = {AIP Conf. Proc.},
  Year                     = {2016},
  Doi                      = {10.1063/1.4952177},
  Url                      = {http://dx.doi.org/10.1063/1.4952177}
}
  

G. Santin, R. Schaback, Approximation of eigenfunctions in kernel-based spaces, Adv. Comput. Math. (2016). Preprint Published
Kernel-based methods in Numerical Analysis have the advantage of yielding optimal recovery processes in the “native” Hilbert space $\mathcal H$ in which they are reproducing. Continuous kernels on compact domains have an expansion into eigenfunctions that are both $L_2$-orthonormal and orthogonal in $\mathcal H$ (Mercer expansion). This paper examines the corresponding eigenspaces and proves that they have optimality properties among all other subspaces of $\mathcal H$. These results have strong connections to $n$-widths in Approximation Theory, and they establish that errors of optimal approximations are closely related to the decay of the eigenvalues. Though the eigenspaces and eigenvalues are not readily available, they can be well approximated using the standard $n$-dimensional subspaces spanned by translates of the kernel with respect to $n$ nodes or centers. We give error bounds for the numerical approximation of the eigensystem via such subspaces. A series of examples shows that our numerical technique via a greedy point selection strategy allows us to calculate the eigensystems with good accuracy.
  
@Article{Santin2016a,
  author    = {Santin, Gabriele and Schaback, Robert},
  title     = {Approximation of eigenfunctions in kernel-based spaces},
  journal   = {Adv. Comput. Math.},
  year      = {2016},
  volume    = {42},
  number    = {4},
  pages     = {973--993},
  issn      = {1572-9044},
  doi       = {10.1007/s10444-015-9449-5},
  url       = {http://dx.doi.org/10.1007/s10444-015-9449-5},
  fjournal  = {Advances in Computational Mathematics},
}
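As background, the Mercer expansion referred to above reads, in standard notation,

$$ K(x,y) = \sum_{j \ge 1} \lambda_j\, \varphi_j(x)\, \varphi_j(y), \qquad \int_\Omega K(x,y)\, \varphi_j(y)\, dy = \lambda_j\, \varphi_j(x), $$

with eigenfunctions $\{\varphi_j\}$ that are $L_2(\Omega)$-orthonormal and $\mathcal H$-orthogonal with $\|\varphi_j\|_{\mathcal H}^2 = 1/\lambda_j$; the decay of the eigenvalues $\lambda_j$ is what governs the error of the optimal subspaces studied in the paper.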
  

2015

S. De Marchi, G. Santin, Fast computation of orthonormal basis for RBF spaces through Krylov space methods, BIT (2015). Preprint Published
In recent years, in the setting of radial basis functions, the study of approximation algorithms has particularly focused on the construction of (stable) bases for the associated Hilbert spaces. One of the ways of describing such spaces and their properties is the study of a particular integral operator and its spectrum. We proposed in a recent work the so-called WSVD basis, which is strictly connected to the eigen-decomposition of this operator and allows one to overcome some problems related to the stability of the computation of the approximant for a wide class of radial kernels. Although effective, this basis is expensive to compute. In this paper we discuss a method to improve the construction and to compute the basis in a fast way using methods related to Krylov subspaces. After reviewing the connections between the two bases, we concentrate on the properties of the new one, describing its behavior by numerical tests.
  
@Article{DeMarchi2015a,
  Title                    = {Fast computation of orthonormal basis for RBF spaces through Krylov space methods},
  Author                   = {De Marchi, Stefano and Santin, Gabriele},
  Journal                  = {BIT Numerical Mathematics},
  Year                     = {2015},
  Number                   = {4},
  Pages                    = {949--966},
  Volume                   = {55},
  Doi                      = {10.1007/s10543-014-0537-6},
  ISSN                     = {0006-3835},
  Publisher                = {Springer Netherlands},
  Url                      = {http://dx.doi.org/10.1007/s10543-014-0537-6}
}
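The computational point above, namely that Krylov iterations give fast access to the leading spectrum of the kernel matrix through matrix-vector products alone, can be illustrated with SciPy's Lanczos-based eigensolver; this is only a generic illustration, not the paper's WSVD construction, and the kernel, the point set, and the truncation level are arbitrary choices.

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Gaussian kernel matrix on n scattered 2D points
n, eps = 1000, 3.0
X = np.random.rand(n, 2)
A = np.exp(-eps ** 2 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

# Leading m eigenpairs via a Krylov (Lanczos) iteration: only products A @ v
# are required, so the matrix could also be applied in a matrix-free fashion.
m = 20
op = LinearOperator((n, n), matvec=lambda v: A @ v)
vals, vecs = eigsh(op, k=m, which='LM')
vals, vecs = vals[::-1], vecs[:, ::-1]          # sort by decreasing eigenvalue

# Rapid eigenvalue decay is what makes a small truncated basis accurate.
print(vals[:5] / vals[0])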
  

R. Cavoretto, S. De Marchi, A. De Rossi, E. Perracchione, G. Santin, RBF approximation of large datasets by partition of unity and local stabilization, CMMSE 2015: Proceedings of the 15th International Conference on Mathematical Methods in Science and Engineering (2015). Preprint Published
We present an algorithm to approximate large datasets by Radial Basis Function (RBF) techniques. The method couples a fast domain decomposition procedure with a localized stabilization method. The resulting algorithm can efficiently deal with large problems and it is robust with respect to the typical instability of kernel methods.
  
@InProceedings{Cavoretto2015,
  Title                    = {RBF approximation of large datasets by partition of unity and local stabilization},
  Author                   = {Cavoretto, Roberto and De Marchi, Stefano and De Rossi, Alessandra and Perracchione, Emma and Santin, Gabriele},
  Booktitle                = {CMMSE 2015 : Proceedings of the 15th International Conference on Mathematical Methods in Science and Engineering},
  Year                     = {2015},
  Editor                   = {Vigo-Aguiar, J.},
  Pages                    = {317--326},
  ISBN                     = {978-84-617-2230-3},
  ISSN                     = {2312-0177},
}
  

2013

S. De Marchi, G. Santin, A new stable basis for radial basis function interpolation, J. Comput. Appl. (2013). Preprint Published
It is well-known that radial basis function interpolants suffer from bad conditioning if the basis of translates is used. In the recent work [5], the authors gave a quite general way to build stable and orthonormal bases for the native space ${\mathcal{N}_{\Phi}(\Omega)}$ associated to a kernel $\Phi$ on a domain $\Omega \subset \mathbb{R}^s$. The method is simply based on the factorization of the corresponding kernel matrix. Starting from that setting we describe a particular basis which turns out to be orthonormal in ${\mathcal{N}_{\Phi}(\Omega)}$ and in $\ell_{2,w}(X)$, where $X$ is a set of data sites of the domain $\Omega$. The basis arises from a weighted singular value decomposition of the kernel matrix. This basis is also related to a discretization of the compact operator $T_{\Phi}: {\mathcal{N}_{\Phi}(\Omega)}\rightarrow{\mathcal{N}_{\Phi}(\Omega)}$, $$T_{\Phi}[f](x) = \int_{\Omega} \Phi(x,y) f(y) dy\quad \forall x\in\Omega$$ and provides a connection with the continuous basis that arises from an eigen-decomposition of $T_{\Phi}$. Finally, using the eigenvalues of this operator, we provide convergence estimates and stability bounds for interpolation and discrete least-squares approximation.
  
@Article{DeMarchi2013,
  Title                    = {A new stable basis for radial basis function interpolation},
  Author                   = {De Marchi, Stefano and Santin, Gabriele},
  Journal                  = {J. Comput. Appl. Math.},
  Year                     = {2013},
  Pages                    = {1--13},
  Volume                   = {253},
  Doi                      = {10.1016/j.cam.2013.03.048},
  Fjournal                 = {Journal of Computational and Applied Mathematics},
  ISSN                     = {0377-0427},
  Url                      = {http://dx.doi.org/10.1016/j.cam.2013.03.048}
}
  

2011

G. Santin, A. Sommariva, M. Vianello, An algebraic cubature formula on curvilinear polygons, Appl. Math. Comput. (2011). Preprint Published
We implement in Matlab a Gauss-like cubature formula on bivariate domains whose boundary is a piecewise smooth Jordan curve (curvilinear polygons). The key tools are Green’s integral formula, together with the recent software package Chebfun to approximate the boundary curve close to machine precision by piecewise Chebyshev interpolation. Several tests are presented, including some comparisons of this new routine ChebfunGauss with the recent SplineGauss that approximates the boundary by splines.
  
@Article{Santin2011,
  Title                    = {An algebraic cubature formula on curvilinear polygons},
  Author                   = {Santin, Gabriele and Sommariva, Alvise and Vianello, Marco},
  Journal                  = {Applied Mathematics and Computation},
  Year                     = {2011},
  Number                   = {24},
  Pages                    = {10003--10015},
  Volume                   = {217},
  Doi                      = {10.1016/j.amc.2011.04.071},
  Fjournal                 = {Appl. Math. Comput.},
  ISSN                     = {0096-3003},
  Mrclass                  = {65D30 (65D32)},
  Mrnumber                 = {2806387},
  Url                      = {http://dx.doi.org/10.1016/j.amc.2011.04.071}
}
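As a hint of the construction, Green's theorem reduces the double integral to a line integral over the (positively oriented, Chebfun-parametrized) boundary; in generic notation,

$$ \int_{\Omega} f(x,y)\, dx\, dy \;=\; \oint_{\partial\Omega} F(x,y)\, dy, \qquad \frac{\partial F}{\partial x}(x,y) = f(x,y), $$

so the cubature reduces to univariate Gauss quadrature along the boundary pieces once these are approximated by piecewise Chebyshev interpolants.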