Simon Hirländer
Dr. Simon Hirländer heads the team "Smart Analytics & Reinforcement Learning". He gained several years of international research experience at CERN, where he also earned his doctorate. In recent years, he has focused successfully on optimizing the performance of the CERN accelerator complex using machine learning, in particular the application of reinforcement learning.

Position: Postdoc, Head of the team "Smart Analytics & Reinforcement Learning"
E-Mail: simon.hirlaender@plus.ac.at
Website: Personal website
Start date: 01.09.2020
Publications
- G. Schäfer, S. Huber, S. Hirländer, et al.: Python-Based Reinforcement Learning on Simulink Models. (2024) https://doi.org/10.1007/978-3-031-65993-5_55
- S. Pochaba, R. Kwitt, S. Hirländer, et al.: Multi-agent Reinforcement Learning and Its Application to Wireless Network Communication. (2024) https://doi.org/10.1007/978-3-031-65993-5_45
- S. Hirländer, S. Pochaba, C. Xu, et al.: Deep Meta Reinforcement Learning for Rapid Adaptation in Linear Markov Decision Processes: Applications to CERN's AWAKE Project. (2024) https://doi.org/10.1007/978-3-031-65993-5_21
- A. Santamaría, C. Xu, L. Scomparin, S. Hirländer, S. Pochaba, A. Eichler, J. Kaiser, M. Schenk: The Reinforcement Learning for Autonomous Accelerators Collaboration. (2024) https://doi.org/10.18429/JACoW-IPAC2024-TUPS62
- S. Hirländer, S. Appel, N. Madysa: Data-Driven Model Predictive Control for Automated Optimization of Injection into the SIS18 Synchrotron. (2024) https://doi.org/10.18429/JACoW-IPAC2024-TUPS59
- S. Hirländer, L. Lamminger, S. Pochaba, J. Kaiser, C. Xu, A. Santamaría, L. Scomparin, V. Kain: Towards Few-Shot Reinforcement Learning in Particle Accelerator Control. (2024) https://doi.org/10.18429/JACoW-IPAC2024-TUPS59
- R. Kozlica, G. Schäfer, S. Hirländer, S. Wegenkittl: A Modular Test Bed for Reinforcement Learning Incorporation into Industrial Applications. (2024) https://doi.org/10.1007/978-3-031-42171-6_15
- A. Oeftiger, S. Garcia, J. Lagrange, S. Hirländer: Active Deep Learning for Nonlinear Optics Design of a Vertical FFA Accelerator. (2023) https://doi.org/10.18429/jacow-ipac2023-wepa026
- S. Hirländer, L. Lamminger, G. Zevi Della Porta, V. Kain: Ultra-Fast Reinforcement Learning in Accelerator Control Demonstrated on CERN AWAKE. (2023) https://doi.org/10.18429/jacow-ipac2023-thpl038
- R. Kozlica, S. Wegenkittl, S. Hirländer: Deep Q-Learning versus Proximal Policy Optimization: Performance Comparison in a Material Sorting Task. (2023) https://doi.org/10.1109/isie51358.2023.10228056
- F.M. Velotti, B. Goddard, V. Kain, R. Ramjiawan, G. Zevi Della Porta, S. Hirländer: Towards Automatic Setup of 18 MeV Electron Beamline Using Machine Learning. (2023) https://doi.org/10.1088/2632-2153/acce21
- F.M. Velotti, B. Goddard, V. Kain, R. Ramjiawan, G.Z.D. Porta, S. Hirländer: Automatic Setup of 18 MeV Electron Beamline Using Machine Learning. (2022) https://doi.org/10.48550/arXiv.2209.03183
- L. Grech, G. Valentino, D. Alves, S. Hirländer: Application of Reinforcement Learning in the LHC Tune Feedback. (2022) https://doi.org/10.3389/fphy.2022.929064
- V. Kain, N. Bruchon, S. Hirländer, N. Madysa, I. Vojskovic, P.K. Skowronski, G. Valentino: Test of Machine Learning at the CERN LINAC4. (2021) https://doi.org/10.18429/JACoW-HB2021-TUEC4
- F. Kröger, G. Weber, S. Hirländer, R. Alemany-Fernández, M. W. Krasny, T. Stöhlker, I. Tolstikhina, V. Shevelko: Charge-State Distributions of Highly Charged Lead Ions at Relativistic Collision Energies. (2021) https://doi.org/10.1002/andp.202100245
- N. Bruchon, G. Fenu, G. Gaio, S. Hirländer, M. Lonza, F.A. Pellegrino, E. Salvato: An Online Iterative Linear Quadratic Approach for a Satisfactory Working Point Attainment at FERMI. (2021) https://doi.org/10.3390/info12070262
- V. Kain, S. Hirländer, B. Goddard, F.M. Velotti, G. Zevi Della Porta, N. Bruchon, G. Valentino: Sample-Efficient Reinforcement Learning for CERN Accelerator Control. (2020) https://doi.org/10.1103/PhysRevAccelBeams.23.124801
- S. Hirländer, N. Bruchon: Model-Free and Bayesian Ensembling Model-Based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL. (2020) https://doi.org/10.48550/arXiv.2006.10330
