Current Research

To be updated…

June 2017: I’m currently working on improving state-of-the-art decision-making agents for non-stationary, stochastic environments by optimizing the exploration-exploitation trade-off with a hybrid adaptive algorithm that combines Bayesian theory with a biologically plausible meta-learning method (see the publications section for a first introduction).
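To make the underlying bandit setting concrete, here is a minimal sketch of a discounted-UCB style agent on a non-stationary stochastic bandit. It illustrates the exploration-exploitation trade-off only; it is not my hybrid Bayesian/meta-learning algorithm, and the arm count, discount factor, and reward process are all hypothetical choices.

```python
# Minimal sketch: discounted UCB on a non-stationary stochastic bandit.
# This illustrates the exploration-exploitation trade-off only; it is NOT
# the hybrid Bayesian / meta-learning algorithm described above. Arm count,
# discount factor, and the reward process are hypothetical.
import math
import random

N_ARMS = 5
GAMMA = 0.98      # discount: recent rewards weigh more (handles non-stationarity)
HORIZON = 2000

counts = [0.0] * N_ARMS   # discounted pull counts per arm
sums = [0.0] * N_ARMS     # discounted reward sums per arm

# Hypothetical non-stationary environment: Bernoulli arms whose means change.
means = [random.random() for _ in range(N_ARMS)]

def pull(arm, t):
    # Abrupt change point halfway through: the best arm may move.
    if t == HORIZON // 2:
        random.shuffle(means)
    return 1.0 if random.random() < means[arm] else 0.0

total_reward = 0.0
for t in range(HORIZON):
    # Decay old statistics so stale observations fade away.
    counts = [GAMMA * c for c in counts]
    sums = [GAMMA * s for s in sums]
    n_total = sum(counts)

    if t < N_ARMS:
        arm = t  # play each arm once to initialize
    else:
        # UCB index: discounted empirical mean + exploration bonus.
        def index(a):
            bonus = math.sqrt(2.0 * math.log(n_total) / counts[a])
            return sums[a] / counts[a] + bonus
        arm = max(range(N_ARMS), key=index)

    r = pull(arm, t)
    counts[arm] += 1.0
    sums[arm] += r
    total_reward += r

print(f"average reward: {total_reward / HORIZON:.3f}")
```

The discounting is the one design choice doing the work here: by geometrically forgetting old pulls, the exploration bonus re-opens after a change point, whereas a plain UCB agent would stay locked onto the formerly best arm.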

Since these hybrid models have shown promising decision-making performance, my next step is to extend them to more complex environments using optimal behavioral hierarchy techniques (Solway ’14, Botvinick ’14). Hierarchical techniques could also be used to cluster the unknown environment into subspaces (Oudeyer ’16), and my first goal is to empirically identify the best (decision agent, parameter vector) tuple for each estimated subspace; a hypothetical sketch of this per-subspace selection follows below.
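The sketch below shows one plausible reading of the per-subspace idea, assuming an epsilon-greedy meta-chooser over candidate (agent, parameter) tuples and a placeholder partition in place of a learned clustering. Every name, candidate, and reward signal here is an illustrative assumption, not the planned system.

```python
# Hypothetical sketch: partition the environment into subspaces, then learn,
# per subspace, which (agent, parameter-vector) tuple performs best.
# Candidates, partitioning, and rewards are illustrative assumptions only.
import random
from collections import defaultdict

# Candidate (agent name, parameter vector) tuples to compare per subspace.
CANDIDATES = [
    ("ucb", {"c": 0.5}),
    ("ucb", {"c": 2.0}),
    ("thompson", {"prior": (1.0, 1.0)}),
]

class SubspaceSelector:
    """Epsilon-greedy choice among candidate tuples, kept per subspace."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # candidate index -> pulls
        self.means = defaultdict(float)   # candidate index -> mean reward

    def choose(self):
        if random.random() < self.epsilon or not self.counts:
            return random.randrange(len(CANDIDATES))
        return max(self.counts, key=lambda i: self.means[i])

    def update(self, i, reward):
        self.counts[i] += 1
        self.means[i] += (reward - self.means[i]) / self.counts[i]

def subspace_of(context, n_subspaces=4):
    # Placeholder partition: a real system would estimate subspaces,
    # e.g. by hierarchically clustering observations (Oudeyer '16).
    return hash(tuple(round(x, 1) for x in context)) % n_subspaces

selectors = defaultdict(SubspaceSelector)

for step in range(1000):
    context = (random.random(), random.random())   # hypothetical observation
    s = subspace_of(context)
    i = selectors[s].choose()
    agent, params = CANDIDATES[i]   # params would configure the chosen agent
    # Stand-in reward; a real run would execute the chosen agent here.
    reward = random.random()
    selectors[s].update(i, reward)

best = {s: CANDIDATES[max(sel.means, key=sel.means.get)][0]
        for s, sel in selectors.items()}
print("preferred agent per subspace:", best)
```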

Connecting my work with neuroscience research, I also plan to implement a multi-level, biologically plausible set of adaptive decision agents that will constitute the dynamically changing short-term and long-term memory of BabyRobot (EU Horizon 2020 project), introducing a learning strategy that has been shown to correlate with prefrontal cortex (PFC) activity in humans (Donoso ’14, Collins ’12). In this way I plan to add significant cognitive components to BabyRobot, and my final goal is to collaborate with neuroscience colleagues to uncover new, valuable correlations between hyper-parameter values and reasoning/intelligence.

Keywords: non-stationary stochastic multi-armed bandits, exploration-exploitation, regret minimization, reinforcement learning, meta-learning, computational neuroscience, cognitive robotics.

Feel free to send me a message with any interesting ideas regarding the fields above!