Network Deconvolution - FAQs
What is the difference between ND.m and ND_regulatory.m?
ND.m is for symmetric square input matrices. ND_regulatory.m is for asymmetric regulatory networks, where edges go from transcription factors (TFs) to target genes. We have shown that if the TF-TF sub-network is symmetric, the eigenvalues of the asymmetric regulatory network are real and network deconvolution can be applied.
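As a small illustration of this property (a sketch only; the sizes n_tf and n_g below are arbitrary), the following MATLAB snippet builds a random regulatory network with a symmetric TF-TF block and checks that the eigenvalues of the full gene-by-gene matrix are real:

    n_tf = 20;                                   % number of transcription factors
    n_g  = 100;                                  % total number of genes (TFs are genes 1..n_tf)
    reg  = rand(n_tf, n_g);                      % TF-to-target-gene weights
    tf_block = reg(:, 1:n_tf);
    reg(:, 1:n_tf) = (tf_block + tf_block') / 2; % make the TF-TF block symmetric
    full_net = zeros(n_g);                       % embed in a gene-by-gene matrix
    full_net(1:n_tf, :) = reg;                   % only TF rows carry outgoing edges
    ev = eig(full_net);
    fprintf('max imaginary part of eigenvalues: %g\n', max(abs(imag(ev))));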
How large can input networks be?
It depends on your memory and CPU. The theoretical computational complexity of the algorithm is O(n^3), where n is the number of nodes in the network. We have proposed two variations of ND that make it scalable to very large networks: the first exploits the sparsity of eigenvalues of low-rank networks (Supplementary Note 1.7.1), and the second parallelizes network deconvolution over potentially overlapping subgraphs of the network (Supplementary Note 1.7.2). Please contact us if you need more details.
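To illustrate the first variant, the sketch below applies the ND eigenvalue mapping using only the top k eigenpairs of a sparse, symmetric observed network G_obs; here k, G_obs, and the prior scaling of the eigenvalues are assumptions for illustration, not the exact procedure of Supplementary Note 1.7.1:

    k = 50;                                % number of eigenpairs to keep (placeholder)
    [V, D] = eigs(sparse(G_obs), k);       % dominant eigenpairs of the observed network
    lam = diag(D);
    lam_dir = lam ./ (1 + lam);            % ND eigenvalue mapping
    G_dir_approx = V * diag(lam_dir) * V'; % rank-k approximation of the direct network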
Can I use ND over an asymmetric input matrix?
If your asymmetric network has a structure similar to regulatory networks (i.e., its TF-TF sub-network is symmetric), you can use ND_regulatory.m. For a general asymmetric network, you need to solve the optimization problem formulated in Supplementary Note 1.4.
What are the differences between theoretical ND and ND’s practical implementation?
The practical implementation of ND differs from the theoretical framework depicted in Figure 1 of the paper in a few ways (a sketch of these steps appears after the list). Specifically, in the practical implementation of ND:
(i) Input networks are made symmetric. In ND_regulatory.m, only the TF-TF sub-network is made symmetric. This step guarantees real eigenvalues.
(ii) Self-loops are ignored (the diagonal of the observed network is set to zero).
(iii) The observed network is linearly scaled according to equation (12) in the Supplementary Notes, so that the maximum absolute eigenvalue of the direct network is guaranteed to be strictly less than one. This is necessary to guarantee convergence of the Taylor series in the ND model.
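The following MATLAB sketch mirrors these three steps; it is an approximation of what the released code does, with G standing for the observed network and beta for the eigenvalue bound discussed below (not a verbatim copy of ND.m):

    G = (G + G') / 2;                        % (i) symmetrize the observed network
    G = G - diag(diag(G));                   % (ii) remove self-loops (zero diagonal)
    [V, D] = eig(G);                         % (iii) scale eigenvalues and deconvolve
    lam   = diag(D);
    lam_p = max(max(lam), 0);                % largest positive eigenvalue
    lam_n = abs(min(min(lam), 0));           % magnitude of the most negative eigenvalue
    m = max(lam_p * (1 - beta) / beta, lam_n * (1 + beta) / beta);  % scaling factor
    lam_dir = lam ./ (m + lam);              % scaling and ND eigenvalue mapping combined
    G_dir = V * diag(lam_dir) / V;           % direct network, V * diag(lam_dir) * inv(V)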
When I run ND, the output network is not sparse. Why?
The outputs of ND.m and ND_regulatory.m are not necessarily sparser than the input network. The output provides “direct weights” for both observed and unobserved interactions. Depending on the application, a cut-off can be applied to keep only the top-scoring edges, or only the weights of observed interactions may be of interest.
Does ND infer un-observed interactions as well?
Network deconvolution provides direct weights for both observed and unobserved interactions, but their use depends on the application. In our paper, we focused on weights for the subset of observed edges, as our goal was to deconvolve indirect effects on the existing interactions. The provided code (ND.m) has the option to report deconvolved weights only for observed interactions, or for all possible interactions, both observed and unobserved.
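For example, restricting the output to observed interactions amounts to masking the deconvolved matrix (G_obs and G_dir below are placeholders for the input and deconvolved networks):

    observed_mask = (G_obs ~= 0);            % edges present in the input network
    G_dir_observed = G_dir .* observed_mask; % keep deconvolved weights of observed edges only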
Does ND infer partial correlations?
Network deconvolution weights are different from partial correlation weights, as the two methods use different models to infer direct weights. However, there are some parallels between them, as Alipanahi and Frey point out in their News and Views article. See their article here.
What are alpha and beta parameters in the code?
Alpha sets the density of the input network to be deconvolved (the fraction of top-weight edges that are kept). Beta sets an upper bound on the maximum absolute eigenvalue of the direct matrix; it should be between 0 and 1 for the method's convergence guarantee to hold. Indirect effects over paths decay exponentially with respect to beta, so the smaller beta is, the smaller the role higher-order interactions play in the observed dependencies.
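As an illustration of how alpha acts as a density threshold (a sketch only; the exact handling inside ND.m may differ slightly):

    alpha = 0.1;                              % e.g. keep the strongest 10% of edges
    w  = sort(G(G > 0), 'descend');           % positive edge weights, strongest first
    th = w(ceil(alpha * numel(w)));           % cut-off for the top alpha fraction
    G_sparse = G .* (G >= th);                % network actually passed to deconvolution
    beta = 0.9;                               % bound on the direct network's eigenvalues, 0 < beta < 1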
Why are the output values between 0 and 1?
At the last step, we linearly map values to the range between zero and one, since in our applications only the relative ranks of direct interactions matter. If you are interested in unscaled output values, comment out the last lines of the provided code.
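The mapping is a simple min-max rescaling of the deconvolved matrix G_dir (illustrative only):

    G_out = (G_dir - min(G_dir(:))) ./ (max(G_dir(:)) - min(G_dir(:)));  % map to [0, 1]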
What is the diagonalizability condition?
A matrix is diagonalizable if it can be decomposed into its eigenvalues and eigenvectors. Some asymmetric networks are not diagonalizable because of perfectly aligned subspaces. In ND_regulatory.m, we added a check of the diagonalizability of the input matrix. If the matrix is not diagonalizable, the code perturbs the input network slightly to break the perfectly aligned subspaces. If it is still non-diagonalizable, MATLAB will display a warning message.
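A sketch of such a check in MATLAB (the tolerance and perturbation magnitude below are illustrative choices, not the exact values used in ND_regulatory.m):

    [V, ~] = eig(G);                          % G is the (asymmetric) input network
    if rcond(V) < 1e-12                       % eigenvectors nearly linearly dependent
        G = G + 1e-6 * max(abs(G(:))) * randn(size(G));   % small random perturbation
        [V, ~] = eig(G);
        if rcond(V) < 1e-12
            warning('Input network appears to be non-diagonalizable.');
        end
    end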
What is the effect of linear scaling of input matrix over ND?
If the input matrix is non-negative (with minimum value zero), linearly scaling the input network scales its eigenvalues by the same factor; this scaling is normalized away inside the ND algorithm, so mathematically it should have no effect. Indeed, in practice we have run ND with and without the linear scaling, and the results are indistinguishable (see results here).
Is the eigenvalue scaling step necessary?
Yes, the eigenvalue scaling is necessary both theoretically, to guarantee convergence of the Taylor series, and practically, since performance decreases without it. As an illustration, we ran ND without eigenvalue scaling on the DREAM5 network benchmarks and found that it consistently performs worse (see plots here).
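To see why, recall the ND model G_obs = G_dir + G_dir^2 + G_dir^3 + ..., which converges to G_dir * inv(I - G_dir) only when the largest absolute eigenvalue of G_dir is below one. A small numerical check of this (illustrative values only):

    n = 50;
    G_dir = randn(n); G_dir = (G_dir + G_dir') / 2;       % random symmetric direct network
    G_dir = 0.9 * G_dir / max(abs(eig(G_dir)));           % scale spectral radius to 0.9
    G_series = zeros(n); term = G_dir;
    for k = 1:200                                         % partial sum of the series
        G_series = G_series + term;
        term = term * G_dir;
    end
    G_closed = G_dir / (eye(n) - G_dir);                  % closed form G_dir * inv(I - G_dir)
    fprintf('max difference: %g\n', max(abs(G_series(:) - G_closed(:))));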
Your correction notice of February 13 mentions that S4 corrected a plotting error. Does this affect your results?
As we state in the correction notice, the original panel (a) had an incorrect x-axis, resulting in a ~5% difference in the plots, which does not affect any of our conclusions. For panel (b), the original version showed performance for the residue network defined at 11 angstroms instead of 7 angstroms. The change does not affect any of our results or conclusions (see an overlay of the erroneous and correct figures here).
Why did you use beta=0.5 for the regulatory network when in fact you recommend using beta as high as possible?
The beta parameter corresponds to the expected scaling of indirect effects at each round of the Taylor series (see section 1.6 in our supplementary information). For the regulatory network, we used beta=0.5, corresponding to a relatively short propagation (faster decay) of indirect effects through the network. DREAM5 is a blind competition, so we selected the ND parameters in advance to ensure a fair comparison with other methods. In retrospect, we could have chosen beta=0.9 or 0.99 with little difference (~5%) in performance (see plots here). The beta-close-to-1 recommendation was made after seeing the performance curves for all three networks in Figure S4, but we selected beta for the DREAM competition prior to making Figure S4, and we kept our original parameters unchanged.
Why don't you use cross-validation to set the scaling parameter in the regulatory network?
In practice, one can choose the beta parameter by cross-validation: separate the data into a training set and a test set, use the training set to select a good beta value, and evaluate performance on the test set with that value. This works well for ND on the regulatory network (see plots here). However, since DREAM is a blind competition, cross-validation was not appropriate, and we selected the beta parameter in advance of generating Figure S4. In practice, however, the value of beta made little difference (~5%) between 0.5 and 0.99 (see plots here).
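A schematic of such a cross-validation loop is shown below; the scoring function score_against_gold and the gold_train/gold_test split are hypothetical placeholders (they are not part of the released code), and the call ND(G_obs, b) assumes beta is the second argument of ND.m (check the file header for the exact argument order):

    betas = [0.5 0.7 0.9 0.95 0.99];
    best_score = -inf; best_beta = NaN;
    for b = betas
        G_dir = ND(G_obs, b);                          % deconvolve with candidate beta
        s = score_against_gold(G_dir, gold_train);     % hypothetical scoring (e.g. AUPR)
        if s > best_score
            best_score = s; best_beta = b;
        end
    end
    final_score = score_against_gold(ND(G_obs, best_beta), gold_test);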
I would like to reproduce your results. Can I use your code, datasets, and evaluation methods?
All code and datasets are publicly available, and we have added an evaluation page here: Try it out!