Professor of Artificial Intelligence and Deputy Head of Department, Department of Computer Science, University of Bath, UK
Vita
Özgür Şimşek is Deputy Head of the Department of Computer Science, where she leads the Artificial Intelligence research group. From 2018 to 2020 she was Deputy Director of the IMI. Before joining the University of Bath in 2017, Özgür Şimşek was a research scientist at the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin, Germany. She received her PhD in Computer Science from the University of Massachusetts Amherst in 2008. Özgür's research spans a broad range of areas in machine learning, including reinforcement learning, supervised learning, learning from small datasets, and bounded rationality. As part of her IMI fellowship, Özgür will develop applications of reinforcement learning in healthcare together with IMI fellows Dr. Will Tillett and Dr. Raj Sengupta.
- Since 2022: Professor, Department of Computer Science, University of Bath, Bath, United Kingdom
- 2017 – 2022: Senior Lecturer, Department of Computer Science, University of Bath, Bath, United Kingdom
- 2018 – Present: Deputy Director, Institute for Mathematical Innovation, University of Bath, Bath, United Kingdom
- 2011 – 2017: Research Scientist, Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany
- 2008 – 2011: Postdoctoral Research Fellow, Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany
- 2008: PhD in Computer Science, University of Massachusetts, Amherst, Massachusetts
- 2004: MSc in Computer Science, University of Massachusetts, Amherst, Massachusetts
- 1997 – 2000: Research Scientist, Human Factors Transportation Research Center, Battelle Memorial Institute, Seattle, Washington
- 1997: MSc in Industrial Engineering and Operations Research, University of Massachusetts, Amherst, Massachusetts
- 1995: BSc in Industrial Engineering, Boğaziçi Üniversitesi, Istanbul, Turkey
Selected Publications
Lichtenberg, J. M., & Şimşek, Ö. (2019). Regularization in directable environments with application to Tetris. In Proceedings of the Thirty-Sixth International Conference on Machine Learning (ICML).
Şimşek, Ö., Algorta, S., & Kothiyal, A. (2016). Why Most Decisions Are Easy in Tetris—And Perhaps in Other Sequential Decision Problems, As Well. In Proceedings of the Thirty-Third International Conference on Machine Learning (ICML).
Şimşek, Ö., & Buckmann, M. (2015). Learning from small samples: An analysis of simple decision heuristics. In Advances in Neural Information Processing Systems (NIPS) 28.
Şimşek, Ö. (2013). Linear decision rule as aspiration for simple decision heuristics. In Advances in Neural Information Processing Systems (NIPS) 26.
Şimşek, Ö., & Jensen, D. (2008). Navigating networks by using homophily and degree. Proceedings of the National Academy of Sciences (PNAS), 105(35), pp. 12758–12762.
Şimşek, Ö., & Barto, A. G. (2008). Skill characterization based on betweenness. In Advances in Neural Information Processing Systems (NIPS) 21.
Şimşek, Ö., & Barto, A. G. (2006). An intrinsic reward mechanism for efficient exploration. In Proceedings of the Twenty-Third International Conference on Machine Learning (ICML).
Şimşek, Ö., Wolfe, A. P., & Barto, A. G. (2005). Identifying useful subgoals in reinforcement learning by local graph partitioning. In Proceedings of the Twenty-Second International Conference on Machine Learning (ICML).
Neville, J., Şimşek, Ö., Jensen, D., Komoroske, J., Palmer, K., & Goldberg, H. (2005). Using relational knowledge discovery to prevent securities fraud. In Proceedings of the Eleventh International Conference on Knowledge Discovery and Data Mining (KDD).
Katsikopoulos, K. V., & Şimşek, Ö. (2005). Optimal doubling strategy against a suboptimal opponent. Journal of Applied Probability, 42, 867–872.
Şimşek, Ö., & Barto, A. G. (2004). Using relative novelty to identify useful temporal abstractions in reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning (ICML). ACM, New York, NY, USA.