[PlanetKR] Research Assistant/Research Associate in Safe Reinforcement Learning through Formal Methods, Imperial College London, Full-time, Fixed term to start October 2023 for 24 months
Francesco Belardinelli
francesco.belardinelli at univ-evry.fr
Tue Apr 18 12:21:12 UTC 2023
This Research Assistant/Research Associate (post-doctoral) post is to
conduct research on /safe reinforcement learning through formal
methods/, under the direction of Dr Francesco Belardinelli, within the
EPSRC New Investigator Award /An Abstraction-based Technique for Safe
Reinforcement Learning/.
Autonomous agents learning to act in unknown environments have been
attracting research interest due to their wider implications for AI, as
well as for their applications in key domains, including robotics,
network optimisation, and resource allocation. Currently, one of the most
successful approaches is reinforcement learning (RL). However, to learn
how to act, agents are required to explore the environment, which in
safety-critical scenarios means that they might take dangerous actions,
possibly harming themselves or even putting human lives at risk.
The main goal of this project is to develop Safe through Abstraction
(multi-agent) Reinforcement learning (StAR), a framework to formally
guarantee the safe behaviour of agents learning to act in unknown
environments, through the satisfaction of safety constraints by the
policies synthesised through RL, both at training and test time. We aim
to combine RL and formal methods to ensure the satisfaction of
constraints expressed in (probabilistic) temporal logic (PTL) in
multi-agent environments.
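To give a rough, hypothetical flavour of the kind of guarantee at stake (this is not the project's abstraction-based method, just a minimal "shielding" sketch on an invented toy line-world): a safety monitor, derived offline from a formal model of the environment, filters the agent's actions so that exploration can never enter an unsafe state.

```python
import random

# Toy illustration (hypothetical, not the StAR framework): states 0..4 on a
# line, where state 4 is unsafe. A shield derived from the transition model
# restricts exploration to actions whose successor satisfies the safety
# constraint, so even a randomly exploring agent never visits state 4.

UNSAFE_STATES = {4}
ACTIONS = {"left": -1, "right": +1}

def successor(state, action):
    """Deterministic transition function of the toy line-world."""
    return max(0, min(4, state + ACTIONS[action]))

def safe_actions(state):
    """The 'shield': actions whose successor is guaranteed safe."""
    return [a for a in ACTIONS if successor(state, a) not in UNSAFE_STATES]

def shielded_choice(state, rng):
    """Explore uniformly at random, but only among shield-approved actions."""
    return rng.choice(safe_actions(state))

rng = random.Random(0)
state = 3
for _ in range(100):
    state = successor(state, shielded_choice(state, rng))
    assert state not in UNSAFE_STATES  # safety holds throughout training
```

In the project, the constraint would instead be expressed in (probabilistic) temporal logic and the safe-action set computed on an abstraction of an unknown, possibly multi-agent environment, rather than hand-coded as here.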
The successful applicant will join the Formal Methods in AI (FMAI)
research group, led by Dr Belardinelli. For further information on the
group and related projects, see https://www.doc.ic.ac.uk/~fbelard/.
The position offers an exciting opportunity for conducting
internationally leading and impactful research in safe reinforcement
learning. The postholder will be responsible for researching and
delivering abstraction-based methods to guarantee the safe and
trustworthy behaviour of autonomous agents based on the most widely used
RL algorithms. They will also be expected to submit publications to
top-tier conferences and journals in AI.
To apply, you must have a strong computer science background with a
focus on AI and experience, including a proven publication track record,
in at least two of the following areas, as well as the ability and
willingness to become familiar with the remaining one: /logic-based
languages and formal methods; formal verification, including model
checking; (safe) reinforcement learning/. You should also have:
* Research Assistant: A Master’s degree (or equivalent) in computer
science or a related area.
* Research Associate: A PhD degree (or equivalent) in computer science
or a related area.
* Familiarity with /standard reinforcement learning libraries and data
analysis/.
* Excellent communication skills and ability to work with others.
* Ability to organise your own work and set priorities to meet deadlines.
*This position is full-time, fixed term, to start October 2023 for 24
months*
*To apply*
Visit https://www.imperial.ac.uk/jobs/ and search using reference
ENG02573. In addition to completing the online application, candidates
should attach:
* A full CV, with a list of all publications
* A 2-page research statement indicating what you see are interesting
research issues relating to the above post and why your expertise is
relevant.
Informal enquiries related to the position should be directed to Dr
Francesco Belardinelli: francesco.belardinelli at imperial.ac.uk.
For queries regarding the application process contact Jamie Perrins:
j.perrins at imperial.ac.uk.
*Closing Date: 31 May 2023 (midnight)*