
18.418 Spring 2021 - Shared screen with speaker view
Stephen Barnes
Trey, do you take into account that drugs are differentially metabolized by individual cell lines?
Manolis Kellis
Question (for the end): why not let the system learn the embedding given the task at hand, rather than rely on curated or independently-learned hierarchical structures?
Manolis Kellis
For the end: Could you comment on the activation functions of the neural network? Are there non-linearities involved, or is it just accumulating evidence and adding it up hierarchically for each pathway?
Karthik Nair
question for later: are cross-validations here split by cell lines, or (drug, cell line) pairs?
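[Editor's note: the distinction in the question above matters because the two splitting schemes test different generalization claims. A minimal sketch with hypothetical toy data (not the speaker's actual setup), contrasting splitting by (drug, cell line) pairs with grouping by cell line using scikit-learn:]

```python
# Contrast two CV schemes on toy (drug, cell line) pairs: splitting by pairs can
# place the same cell line in both train and test (leakage w.r.t. cell lines),
# while grouping by cell line holds out whole cell lines per fold.
from sklearn.model_selection import KFold, GroupKFold

# Hypothetical toy data: 3 drugs x 4 cell lines = 12 (drug, cell_line) samples.
pairs = [(d, c) for d in ["drugA", "drugB", "drugC"]
                for c in ["line1", "line2", "line3", "line4"]]
cell_lines = [c for _, c in pairs]

def cell_line_overlap(splits):
    """True if any fold's test cell lines also appear in its training set."""
    return any(
        {cell_lines[i] for i in test} & {cell_lines[i] for i in train}
        for train, test in splits
    )

# Scheme 1: split by (drug, cell line) pairs -- test cell lines leak into train.
pair_leakage = cell_line_overlap(
    KFold(n_splits=4, shuffle=True, random_state=0).split(pairs))

# Scheme 2: split by cell line -- every fold holds out entire cell lines.
group_leakage = cell_line_overlap(
    GroupKFold(n_splits=4).split(pairs, groups=cell_lines))

print(pair_leakage, group_leakage)  # True False
```

Under pair-wise splitting the model can memorize cell-line-specific signal; cell-line-wise splitting tests generalization to unseen cell lines, which is the harder and often more relevant question.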
Bonnie Berger
I'm saving questions in the chat for the end so no worries.
Alex Barbera
question for later: given that the hierarchy is built with knowledge of the pathways, wouldn't it be better to use something like a random forest, which is inherently more interpretable?
Ben Lengerich
(question for later): Is it possible to assign certainty/uncertainty to interpretations and learned connections in the VNN?
Can there also be improvements by incorporating more information about types of mutations (not all mutations are created equal and I’m curious how that is handled and if it could be improved)?
Mina Sadat Mahmoodi
question: Can you explain how significant the curves on the last slide are? Does the slight difference between the red and blue curves have high biological importance?
Dadi Gao
Two questions: 1. If the GO structure is shuffled randomly, can the model still reach a similar performance? 2. Is there a way to compare the contribution from the mutation and the contribution from the drug in a mutation-drug pair? The biological concern is that a drug could be so potent that it kills any cell anyway.
Chris Sander
Q: how does elastic net regression do on the everolimus plot - and how good is interpretability for it?
Rohit Singh
Both the genotype and drug representations ended up in a 6-dimensional space, if my understanding is correct? Do the two representations co-embed well together? What was the architecture for the final prediction part that took these embeddings as input?