Name: Yang, Kaifeng
Title: Associate Professor, Dr.
Office: Information Building #108
Tel: TBA
Email: kfyang@nwafu.edu.cn
Who am I?
Kaifeng Yang, Ph.D. in Mathematics and Natural Sciences, is an Associate Professor. His primary research interests include (single- and multi-objective) optimization, explainable artificial intelligence, machine learning, and statistics. He has led and participated in several international research projects, including major projects funded by the Austrian Science Fund (FWF), the Netherlands Organization for Scientific Research (NWO), and the European Union's Research and Innovation Programs. In recent years, Dr. Yang has published numerous papers in prestigious international journals and top-tier conferences, such as Swarm and Evolutionary Computation, the Journal of Global Optimization, and the International Conference on Machine Learning (ICML). Many of these publications appear in high-impact journals (with the highest impact factor reaching 11.5). His research has been applied by internationally renowned institutions, including Facebook, MIT, Tsinghua University, and Peking University, across various fields. Dr. Yang served as a guest editor for the Journal of Industrial Information Integration (Impact Factor: 15.6) and has been invited multiple times to deliver keynote speeches at international academic conferences. He has received several research awards, including the High-Tech Talent Recruitment Award from the Austrian Research Promotion Agency (FFG) and the High-Tech Talent Tax Credit from the Dutch government. According to Google Scholar, as of April 1, 2025, his publications have been cited 921 times, with an h-index of 13 and an i10-index of 21.

Google Scholar profile: https://scholar.google.com/citations?user=7WkpjGwAAAAJ&hl=en

Research Topics/Interests
Education
Work Experience
Honors and Awards
Selected Invited Academic Presentations
Previous Research Contributions
Decreasing the computational complexity of HV-based acquisition functions: Expected hypervolume improvement (EHVI) is a widely used acquisition function, but its exact computation is expensive. I decreased the computational complexity of EHVI from O(n^3 log n) and O(n^4 log n) to the asymptotically optimal Θ(n log n) in the 2-D and 3-D cases. In addition, my generalized formula of EHVI for high-dimensional objective spaces (m ≥ 4) serves as a benchmark of EHVI for many-objective cases. The generalized partitioning of the non-dominated space used for EHVI also enables the exact computation of other acquisition functions (e.g., PoI, TEHVI) with the same computational complexity as EHVI. The results of this work have been utilized by MIT, Facebook, NASA, and other research institutes for both scientific research (such as hyperparameter tuning for machine learning) and real-world applications. (A minimal Monte Carlo sketch of the underlying hypervolume-improvement quantity is given below.)

Parallelization in Bayesian optimization: To reduce the execution time of MOBO, I proposed two different ways to evaluate multiple solutions on the expensive objective functions in parallel:
● using q-PoI as the acquisition function;
● using TEHVI or TPoI via the following steps: 1) partition the objective space into several subspaces; 2) set the truncated domain of TEHVI/TPoI based on the lower and upper bounds of each subspace; 3) use TEHVI/TPoI as the acquisition function to search for multiple solutions in parallel in the different subspaces.

Extending the Gaussian process to mixed-integer problems: I used a heterogeneous metric, in place of the Euclidean metric, to compute distances in the Gaussian process. This proposed method extends the application of the Gaussian process from a continuous space to a mixed-integer space. The technique has been utilized by LUCM and HRI-EU for the prediction of Parkinson's disease and car crashes, respectively.

Proposing explainable surrogate models capable of quantifying prediction uncertainty: Exploiting the linear-mapping property of the Gaussian distribution, I developed a hybrid approach that combines genetic-programming-based symbolic regression with a Gaussian process: the Gaussian mean is predicted by the symbolic-regression model, and the Gaussian variance is approximated by the Gaussian process.

Proposing two different methods to quantify the posterior covariance of a multi-task Gaussian process (MTGP): In MTGP, the linear model of coregionalization (LMC) is commonly used to quantify the correlation between different tasks, but it simply assumes a linear correlation and that the correlation between objectives is independent of the observations. The newly proposed methods address these limitations.

Deriving gradients of several acquisition functions, including EHVI, TEHVI, PoI, and TPoI: Exact computation of an acquisition function's gradient reduces MOBO execution time and allows well-established mathematical optimization algorithms (e.g., gradient ascent/descent and even Newton methods) to replace evolutionary algorithms. The EHVI gradient result has been utilized by EPSON.

Incorporating two types of correlation information, via multiple-point PoI (q-PoI) and correlated PoI (cPoI): For q-PoI, I proposed and derived explicit formulas for q ≥ 2 and m ≥ 2 in five distinct cases, which take into account the covariance of multiple points for a given objective. This theoretical result provides an effective and straightforward approach for MOBO to search for multiple optimal solutions in parallel within one iteration. For cPoI, I proposed and derived an explicit formula (m ≥ 2) that considers the objective correlations in multi-objective acquisition functions. This work provides an alternative to existing techniques for correlated multi-objective problems.
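All of the hypervolume-based acquisition functions above build on the hypervolume improvement (HVI) that a candidate point would add to the current Pareto-front approximation. The sketch below is a minimal, illustrative Python example (the function names and toy numbers are my own, not taken from the publications): it estimates EHVI by plain Monte Carlo sampling for a bi-objective minimization problem, assuming independent Gaussian predictive marginals from a surrogate model.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume of a 2-D point set (minimization) w.r.t. a reference point."""
    pts = np.asarray(points, dtype=float)
    ref = np.asarray(ref, dtype=float)
    pts = pts[np.all(pts < ref, axis=1)]        # keep only points dominating the reference
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]            # sweep along the first objective
    hv, prev_y2 = 0.0, ref[1]
    for y1, y2 in pts:
        if y2 < prev_y2:                        # non-dominated points add a rectangular slice
            hv += (ref[0] - y1) * (prev_y2 - y2)
            prev_y2 = y2
    return hv

def ehvi_monte_carlo(mu, sigma, pareto_front, ref, n_samples=10_000, seed=0):
    """Monte Carlo estimate of EHVI for a candidate whose two objectives are
    modeled as independent Gaussians N(mu_i, sigma_i^2)."""
    rng = np.random.default_rng(seed)
    hv_old = hypervolume_2d(pareto_front, ref)
    samples = rng.normal(mu, sigma, size=(n_samples, 2))
    gains = [hypervolume_2d(np.vstack([pareto_front, s]), ref) - hv_old for s in samples]
    return float(np.mean(gains))

# Toy usage: two Pareto-optimal points and one candidate predicted by the surrogate.
front = np.array([[1.0, 3.0], [3.0, 1.0]])
print(ehvi_monte_carlo(mu=[2.0, 2.0], sigma=[0.5, 0.5], pareto_front=front, ref=[5.0, 5.0]))
```

The exact algorithms in the publications replace this sampling loop with a closed-form integration over a box decomposition of the non-dominated region, which is what yields the Θ(n log n) complexity in the 2-D and 3-D cases.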
Deriving and utilizing the probability density function of HVI, including the upper confidence bound (UCB) and the ε-probability of hypervolume improvement (ε-PoHVI): Theoretically, EHVI only considers the first moment of the HVI distribution, so its value becomes less trustworthy/meaningful when the prediction uncertainty is large. To avoid this theoretical disadvantage, I derived the explicit probability density function of HVI using a Taylor expansion. Based on this PDF, I proposed the UCB to exactly compute the quantile of the hypervolume improvement, and ε-PoHVI to exactly compute the probability of achieving at least ε HVI.

Deriving a formula for the expectation of the R2 indicator (expected R2, ER2): The user-defined utopian (ideal) point in R2 is friendly to domain experts, as it can be interpreted as the ideal or best solution in the objective space. However, an exact computation of ER2 was not previously available. In this research, both an approximation method and an explicit formula are provided to compute ER2 using the Chebyshev utility function in bi-objective cases.

Integrating domain-expert prior knowledge into acquisition functions, including truncated expected hypervolume improvement (TEHVI) and truncated probability of improvement (TPoI): The methodology employs a truncated domain of the Gaussian distribution, tailored to the objective space; the truncation can be customized based on the prior knowledge provided by domain experts (a minimal sketch of the truncation idea follows). The results of this work are utilized by Tohoku University and Institut Teknologi Bandung for CFD optimization problems and have also been applied to preference-based multi-objective optimization problems.
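To illustrate the truncation idea behind TEHVI and TPoI, the following is a minimal single-objective sketch (assuming SciPy is available; the interval bounds and numbers are illustrative only, not from the publications): the predictive Gaussian of a candidate is restricted to an interval of objective values that a domain expert considers relevant, which changes the acquisition value relative to the untruncated probability of improvement.

```python
from scipy.stats import norm, truncnorm

def poi(mu, sigma, best):
    """Plain probability of improvement for a minimization problem."""
    return norm.cdf((best - mu) / sigma)

def truncated_poi(mu, sigma, best, lower, upper):
    """Probability of improvement when the predictive Gaussian is truncated to
    [lower, upper], i.e. the objective range the domain expert cares about."""
    a, b = (lower - mu) / sigma, (upper - mu) / sigma   # standardized truncation bounds
    return truncnorm.cdf(best, a, b, loc=mu, scale=sigma)

# Same surrogate prediction, with and without expert-provided truncation.
print(poi(mu=1.5, sigma=1.0, best=1.0))                                  # ~0.31
print(truncated_poi(mu=1.5, sigma=1.0, best=1.0, lower=0.0, upper=2.0))  # ~0.39
```

In the TEHVI/TPoI work itself, the truncation is applied in the multi-objective setting and combined with the hypervolume-based (or PoI-based) integration over the non-dominated region; setting different truncated domains per subspace is also what enables the parallel variant described above.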
Teaching
2023 - 2024: Python, Master Course, University of Veterinary Medicine Vienna, Austria
2023 - 2024: Machine Learning, Master Course, University of Applied Sciences Upper Austria, Austria
2018 - 2019: Reinforcement Learning, Master Course, Leiden University, The Netherlands
2017 - 2018: Natural Computing, Bachelor Course, Leiden University, The Netherlands
2016 - 2019: Evolutionary Algorithms, Master Course, Leiden University, The Netherlands
Publications
Peer-reviewed Journal Papers:

[10] Bogdan Burlacu, Kaifeng Yang, and Michael Affenzeller. “Population diversity and inheritance in genetic programming for symbolic regression”. In: Natural Computing (Jan. 2023). issn: 1572-9796. doi: 10.1007/s11047-022-09934-x. IF: 2.1 (CCF C, JCR Q3)
[9] Khurram Mushtaq, Runmin Zou, Asim Waris, Kaifeng Yang, Ji Wang, Javaid Iqbal, and Mohammed Jameel. “Multivariate wind power curve modeling using multivariate adaptive regression splines and regression trees”. In: PLoS ONE 18.8 (2023), e0290316. IF: 3.7 (JCR Q2)
[8] Kaifeng Yang, Michael Affenzeller, and Guozhi Dong. “A parallel technique for multi-objective Bayesian global optimization: Using a batch selection of probability of improvement”. In: Swarm and Evolutionary Computation 75 (2022), p. 101183. issn: 2210-6502. doi: 10.1016/j.swevo.2022.101183. IF: 10.3 (JCR Q1)
[7] Runmin Zou, Mengmeng Song, Yun Wang, Ji Wang, Kaifeng Yang, and Michael Affenzeller. “Deep non-crossing probabilistic wind speed forecasting with multi-scale features”. In: Energy Conversion and Management 257 (2022), p. 115433. issn: 0196-8904. doi: 10.1016/j.enconman.2022.115433. IF: 11.5 (JCR Q1)
[6] Kaifeng Yang, Michael Emmerich, André Deutz, and Thomas Bäck. “Multi-Objective Bayesian Global Optimization using expected hypervolume improvement gradient”. In: Swarm and Evolutionary Computation 44 (2019), pp. 945–956. issn: 2210-6502. doi: 10.1016/j.swevo.2018.10.007. IF: 10.3 (the only corresponding author, JCR Q1)
[5] Duc Van Nguyen, Marios Kefalas, Kaifeng Yang, Asteris Apostolidis, Markus Olhofer, Steffen Limmer, and THW Bäck. “A review: Prognostics and health management in automotive and aerospace”. In: International Journal of Prognostics and Health Management 10.2 (2019), p. 35. (SJR Q2)
[4] Duc Van Nguyen, Steffen Limmer, Kaifeng Yang, Markus Olhofer, and Thomas Bäck. “Modeling and Prediction of Remaining Useful Lifetime for Maintenance Scheduling Optimization of a Car Fleet”. In: International Journal of Performability Engineering 15.9 (2019), p. 2318. (SJR Q3)
[3] Kaifeng Yang, Michael Emmerich, André Deutz, and Thomas Bäck. “Efficient computation of expected hypervolume improvement using box decomposition algorithms”. In: Journal of Global Optimization 75.1 (Sept. 2019), pp. 3–34. issn: 1573-2916. doi: 10.1007/s10898-019-00798-7. IF: 2.1 (CCF B, the only corresponding author, JCR Q2)
[2] Kaifeng Yang and Ji Wang. “A Review on the Algorithms of Distribution Network Reconfiguration”. In: Southern Power System Technology 4 (2013), pp. 022–028.
[1] Kaifeng Yang, Ji Wang, and Hui Peng. “Implementation of Neuro-Fuzzy controller for smartcar based on fuzzyTECH”. In: Journal of Northwest A & F University (Natural Science Edition) 40.012 (2012), pp. 230–234.

Peer-reviewed Conference Papers:

[28] Xilu Wang, Kaifeng Yang, Peng Liao, Mengxuan Zhang, Yaochu Jin. “Efficient Federated Bayesian Optimization with Symbolic Regression Model”. In: 2025 IEEE Congress on Evolutionary Computation (CEC). (Accepted)
[27] Hao Wang, Kaifeng Yang, Michael Affenzeller. “Probability Distribution of Hypervolume Improvement in Bi-objective Bayesian Optimization”. In: International Conference on Machine Learning. ICML ’24. 2024. (the only corresponding author, CCF A, Qualis A1)
[26] Kirill Antonov, Roman Kalkreuth, Kaifeng Yang, Thomas Bäck, Niki van Stein, and Anna V. Kononova. “A Functional Analysis Approach to Symbolic Regression”. In: The Genetic and Evolutionary Computation Conference. GECCO ’24. 2024. (CCF C, Qualis A1)
[25] Kaifeng Yang, Bernhard Werth, and Michael Affenzeller. “Age-Layer-Population-Structure with Self-Adaptation in Optimization”. In: The International Conference on Computer Aided Systems Theory. EUROCAST ’24. 2024. (in print, Qualis B3)
[24] Fu Xing Long, Diederick Vermetten, Anna V. Kononova, Roman Kalkreuth, Kaifeng Yang, Thomas Bäck, and Niki van Stein. “Challenges of ELA-guided Function Evolution using Genetic Programming”. In: The 15th International Joint Conference on Computational Intelligence. IJCCI 2023. 2023. (in print)
[23] Hao Wang and Kaifeng Yang. “Bayesian Optimization”. In: Many-Criteria Optimization and Decision Analysis: State-of-the-Art, Present Challenges, and Future Perspectives. Ed. by Dimo Brockhoff, Michael Emmerich, Boris Naujoks, and Robin Purshouse. Cham: Springer International Publishing, 2023, pp. 271–297. isbn: 978-3-031-25263-1. doi: 10.1007/978-3-031-25263-1_10.
[22] Bernhard Werth, Johannes Karder, Andreas Beham, Erik Pitzer, Kaifeng Yang, and Stefan Wagner. “Walking through the Quadratic Assignment-Instance Space: Algorithm Performance and Landscape Measures”. In: Proceedings of the Companion Conference on Genetic and Evolutionary Computation. GECCO ’23. Lisbon, Portugal, 2023. doi: 10.1145/3583133.3596374. (CCF C, Qualis A1)
[21] Kaifeng Yang and Michael Affenzeller. “Surrogate-assisted Multi-objective Optimization via Genetic Programming Based Symbolic Regression”. In: Evolutionary Multi-Criterion Optimization. Ed. by Michael Emmerich, André Deutz, Hao Wang, Anna V. Kononova, Boris Naujoks, Ke Li, Kaisa Miettinen, and Iryna Yevseyeva. Cham: Springer Nature Switzerland, 2023, pp. 176–190. isbn: 978-3-031-27250-9. (the only corresponding author, Qualis A2)
[20] Kaifeng Yang, Kai Chen, Michael Affenzeller, and Bernhard Werth. “A New Acquisition Function for Multi-objective Bayesian Optimization: Correlated Probability of Improvement”. In: Proceedings of the Companion Conference on Genetic and Evolutionary Computation. GECCO ’23. Lisbon, Portugal, 2023. doi: 10.1145/3583133.3596325. (CCF C, Qualis A1)
[19] Kaifeng Yang, Sixuan Liu, Michael Affenzeller, and Guozhi Dong. “Gradients of Acquisition Functions for Bi-objective Bayesian Optimization”. In: 2023 19th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). ICNC-FSKD ’23. 2023, pp. 1–9. doi: 10.1109/ICNC-FSKD59587.2023.10280812.
[18] Michael Affenzeller, Michael Bögl, Lukas Fischer, Florian Sobieczky, Kaifeng Yang, and Jan Zenisek. “Prescriptive Analytics: When Data- and Simulation-based Models Interact in a Cooperative Way”. In: 2022 24th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC). 2022, pp. 1–8. doi: 10.1109/SYNASC57785.2022.00009. (Qualis C)
[17] Kaifeng Yang and Michael Affenzeller. “Quantifying Uncertainties of Residuals in Symbolic Regression via Kriging”. In: Procedia Computer Science 200 (2022). 3rd International Conference on Industry 4.0 and Smart Manufacturing, pp. 954–961. issn: 1877-0509. doi: 10.1016/j.procs.2022.01.293. url: https://www.sciencedirect.com/science/article/pii/S1877050922003027.
[16] Michael Emmerich, Kaifeng Yang, and André H Deutz. “Infill criteria for multiobjective Bayesian optimization”. In: High-Performance Simulation-Based Optimization. Springer, 2020, pp. 3–16.
[15] Koen van der Blom, Kaifeng Yang, Thomas Bäck, and Michael Emmerich. “Towards Multi-objective Mixed-Integer Evolution Strategies”. In: AIP Conference Proceedings 2070.1 (2019), p. 020046. doi: 10.1063/1.5090013. eprint: https://aip.scitation.org/doi/pdf/10.1063/1.5090013.
[14] André Deutz, Michael Emmerich, and Kaifeng Yang. “The Expected R2-Indicator Improvement for Multi-objective Bayesian Optimization”. In: Evolutionary Multi-Criterion Optimization. Ed. by Kalyanmoy Deb, Erik Goodman, Carlos A. Coello Coello, Kathrin Klamroth, Kaisa Miettinen, Sanaz Mostaghim, and Patrick Reed. Cham: Springer International Publishing, 2019, pp. 359–370. (Qualis A2)
[13] André Deutz, Kaifeng Yang, and Michael Emmerich. “The R2 Indicator: a Study of its Expected Improvement in Case of Two Objectives”. In: AIP Conference Proceedings 2070.1 (2019), p. 020054. doi: 10.1063/1.5090021. eprint: https://aip.scitation.org/doi/pdf/10.1063/1.5090021.
[12] Kaifeng Yang, Koen van der Blom, Thomas Bäck, and Michael Emmerich. “Towards Single- and Multiobjective Bayesian Global Optimization for Mixed Integer Problems”. In: AIP Conference Proceedings 2070.1 (2019), p. 020044. doi: 10.1063/1.5090011. eprint: https://aip.scitation.org/doi/pdf/10.1063/1.5090011. (the only corresponding author)
[11] Kaifeng Yang, Pramudita Satria Palar, Michael Emmerich, Koji Shimoyama, and Thomas Bäck. “A Multi-point Mechanism of Expected Hypervolume Improvement for Parallel Multi-objective Bayesian Global Optimization”. In: Proceedings of the Genetic and Evolutionary Computation Conference. GECCO ’19. Prague, Czech Republic: ACM, 2019, pp. 656–663. isbn: 978-1-4503-6111-8. doi: 10.1145/3321707.3321784. (CCF C, Qualis A1)
[10] Pramudita Satria Palar, Kaifeng Yang, Koji Shimoyama, Michael Emmerich, and Thomas Bäck. “Multi-objective Aerodynamic Design with User Preference Using Truncated Expected Hypervolume Improvement”. In: Proceedings of the Genetic and Evolutionary Computation Conference. GECCO ’18. Kyoto, Japan: ACM, 2018, pp. 1333–1340. isbn: 978-1-4503-5618-3. doi: 10.1145/3205455.3205497. (CCF C, Qualis A2, equal first author)
[9] Kaifeng Yang, Michael Emmerich, André Deutz, and Carlos M Fonseca. “Computing 3-D Expected Hypervolume Improvement and Related Integrals in Asymptotically Optimal Time”. In: International Conference on Evolutionary Multi-Criterion Optimization. Ed. by Heike Trautmann, Günter Rudolph, Kathrin Klamroth, Oliver Schütze, Margaret Wiecek, Yaochu Jin, and Christian Grimme. Springer. Cham, 2017, pp. 685–700. (the only corresponding author, Qualis A2)
[8] Yali Wang, Longmei Li, Kaifeng Yang, and Michael Emmerich. “A New Approach to Target Region Based Multiobjective Evolutionary Algorithms”. In: 2017 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2017, pp. 1757–1764. (Qualis A2)
[7] Michael Emmerich, Kaifeng Yang, André Deutz, Hao Wang, and Carlos M. Fonseca. “A Multicriteria Generalization of Bayesian Global Optimization”. In: Advances in Stochastic and Deterministic Global Optimization. Ed. by Panos M. Pardalos, Anatoly Zhigljavsky, and Julius Žilinskas. Cham: Springer, Nov. 2016, pp. 229–243.
[6] Kaifeng Yang, Andre Deutz, Zhiwei Yang, Thomas Bäck, and Michael Emmerich. “Truncated expected hypervolume improvement: Exact computation and application”. In: 2016 IEEE Congress on Evolutionary Computation (CEC). IEEE. 2016, pp. 4350–4357. doi: 10.1109/CEC.2016.7744343. (Qualis A2)
[5] Kaifeng Yang, Longmei Li, André Deutz, Thomas Bäck, and Michael Emmerich. “Preference-based Multiobjective Optimization using Truncated Expected Hypervolume Improvement”. In: 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). IEEE, 2016, pp. 276–281. doi: 10.1109/FSKD.2016.7603186.
[4] Zhiwei Yang, Hao Wang, Kaifeng Yang, Thomas Bäck, and Michael Emmerich. “SMS-EMOA with multiple dynamic reference points”. In: 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). IEEE. 2016, pp. 282–288.
[3] Iris Hupkens, André Deutz, Kaifeng Yang, and Michael Emmerich. “Faster exact algorithms for computing expected hypervolume improvement”. In: International Conference on Evolutionary Multi-Criterion Optimization. Ed. by António Gaspar-Cunha, Carlos Henggeler Antunes, and Carlos Coello Coello. Springer. Cham, 2015, pp. 65–79. (Qualis A2)
[2] Kaifeng Yang, Daniel Gaida, Thomas Bäck, and Michael Emmerich. “Expected hypervolume improvement algorithm for PID controller tuning and the multiobjective dynamical control of a biogas plant”. In: 2015 IEEE Congress on Evolutionary Computation (CEC). May 2015, pp. 1934–1942. doi: 10.1109/CEC.2015.7257122. (Qualis A2)
[1] Kaifeng Yang, Michael Emmerich, Rui Li, Ji Wang, and Thomas Bäck. “Power distribution network reconfiguration by evolutionary integer programming”. In: International Conference on Parallel Problem Solving from Nature – PPSN XIII. Ed. by Thomas Bartz-Beielstein, Jürgen Branke, Bogdan Filipič, and Jim Smith. Springer. Cham, 2014, pp. 11–23. (CCF B, Qualis A2)

Book:

[1] Kaifeng Yang. “Multi-objective Bayesian Global Optimization for Continuous Problems and Applications”. Leiden University, ISBN: 9789462998018, 2017.

Miscellaneous:

[3] Fu Xing Long, Diederick Vermetten, Anna V. Kononova, Roman Kalkreuth, Kaifeng Yang, Thomas Bäck, and Niki van Stein. Challenges of ELA-guided Function Evolution using Genetic Programming. 2023. arXiv: 2305.15245 [cs.NE].
[2] Hao Wang, Kaifeng Yang, Michael Affenzeller. Probability Distribution of Hypervolume Improvement in Bi-objective Bayesian Optimization. 2022. arXiv: 2205.05505 [cs.LG].
[1] Stefan Niculae, Daniel Dichiu, Kaifeng Yang, and Thomas Bäck. Automating penetration testing using reinforcement learning. 2020.
Ph.D. and Master's Supervisions
06/2021 [Master] Mohammad Iman Sayyadzadeh, The First Supervisor, “Optimization of Hyper-parameters of Artificial Neural Networks using Genetic Algorithm”, University of Applied Sciences Upper Austria
2018-2020 [Ph.D.] Marios Kefalas, The Second Supervisor (The First Supervisor: Prof. Thomas Bäck), Leiden University
2018-2020 [Ph.D.] Duc Van Nguyen, The Second Supervisor (The First Supervisor: Prof. Thomas Bäck), Leiden University
06/2019 [Master] Jelle van den Berg, The First Supervisor, “Using AI to predict ICU patient mortality”, Leiden University
05/2019 [Master] Laurens Beljaards, The First Supervisor, “Towards Environmental Storytelling by Evolutionary Algorithms”, Leiden University
01/2019 [Master] Martijn J. Post, The Second Supervisor, “Tax data and reinforcement learning”, Leiden University
12/2018 [Master] Jelle van den Berg, The First Supervisor, “Artificial intelligence to make accurate predictions for Business Intelligence”, Leiden University
10/2018 [Master] Lan Jiaqi, The Second Supervisor, “Critical Water Infrastructure Sensor Placement Optimization”, Leiden University
05/2018 [Master] Wilco Verhoef, The Second Supervisor, “Convolutional Neural Networks for Automatic Classification of Radar Signals in Time Domain: Learning the Micro-Doppler Signature of Human Gait”, Leiden University
05/2018 [Master] Roy de Winter, The Second Supervisor, “Designing Ships using Constrained Multi-Objective Efficient Global Optimization”, Leiden University