Diversity oriented Deep Reinforcement Learning for targeted molecule generation
Abstract: In this work, we explore the potential of deep learning to streamline the process of identifying new potential drugs through the computational generation of molecules with interesting biological properties. Two deep neural networks compose our targeted generation framework: the Generator, which is trained to learn the building rules of valid molecules using the SMILES string notation, and the Predictor, which evaluates the newly generated compounds by predicting their affinity for the desired target. The Generator is then optimized through Reinforcement Learning to produce molecules with bespoke properties. The innovation of this approach is the exploratory strategy applied during the reinforcement training process, which seeks to add novelty to the generated compounds. This training strategy employs two Generators interchangeably to sample new SMILES: the initially trained model, which remains fixed, and a copy of it that is updated during training to uncover the most promising molecules. The evolution of the reward assigned by the Predictor determines how often each Generator is employed to select the next token of the molecule. This strategy establishes a compromise between the need to acquire more information about the chemical space and the need to sample new molecules using the experience gained so far. To demonstrate the effectiveness of the method, the Generator is trained to design molecules with an optimized partition coefficient as well as high inhibitory power against the Adenosine $A_{2A}$ and $\kappa$ opioid receptors. The results reveal that the model can effectively steer the newly generated molecules in the desired direction. More importantly, it was possible to find promising sets of unique and diverse molecules, which was the main purpose of the newly implemented strategy.
| Main Authors | Tiago Pereira, Maryam Abbasi, Bernardete Ribeiro, Joel P. Arrais |
|---|---|
| Affiliation | Department of Informatics Engineering, Centre for Informatics and Systems of the University of Coimbra, University of Coimbra |
| Format | Article |
| Language | English |
| Published | BMC, 2021-03-01 |
| Series | Journal of Cheminformatics |
| ISSN | 1758-2946 |
| Subjects | Drug Design; SMILES; Reinforcement Learning; RNN |
| Online Access | https://doi.org/10.1186/s13321-021-00498-z |
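
The exploratory strategy described in the abstract alternates between a frozen copy of the pretrained Generator and the copy being updated by Reinforcement Learning, choosing between them token by token according to how the Predictor's reward has been evolving. The Python sketch below illustrates one plausible reading of that sampling loop; it is not the authors' implementation, and the `sample_next` interface, the probability update rule, and the start/end tokens are all assumptions made for illustration.

```python
# Minimal sketch (assumed, not the authors' code) of the dual-Generator
# sampling idea: each SMILES token is drawn either from the frozen pretrained
# Generator or from the Generator being updated by RL, with the choice
# probability shifted by the recent trend of the Predictor's reward.

import random

def sample_smiles(g_frozen, g_updated, reward_history, max_len=100,
                  start_token="G", end_token="E", base_explore=0.5):
    """Sample one SMILES string token by token from the two Generators."""
    # If rewards have been improving, rely more on the updated Generator
    # (exploit the experience gained so far); otherwise fall back more often
    # on the frozen Generator (explore the chemistry learned in pretraining).
    if len(reward_history) >= 2 and reward_history[-1] > reward_history[-2]:
        p_updated = min(1.0, base_explore + 0.2)
    else:
        p_updated = max(0.0, base_explore - 0.2)

    tokens = [start_token]
    for _ in range(max_len):
        generator = g_updated if random.random() < p_updated else g_frozen
        next_token = generator.sample_next(tokens)  # assumed model interface
        if next_token == end_token:
            break
        tokens.append(next_token)
    return "".join(tokens[1:])
```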