From microorganisms to plants and mammals, cellular metabolism plays a pivotal role by providing building blocks and energy for biosynthesis, enabling adaptive responses, and driving differentiation. A straightforward approach to profiling large cohorts of tens of thousands of samples by mass spectrometry unlocks novel, fascinating opportunities.
In a SelectScience webinar, now available on demand, Nicola Zamboni, Ph.D., discusses how he and his research group at the Institute of Molecular Systems Biology, ETH Zurich, use mass spectrometry to investigate metabolic networks, and why the technique has been a mainstay of their research for more than two decades thanks to its speed, sensitivity, robustness, coverage, and flexibility.
Read on for the highlights from the webinar Q&A session below.
NZ: The main advantage is the dynamic range. With the new instruments, you get a much wider dynamic range within the same scan. For our kind of application, this is crucial because you can have highly abundant and very rare features coeluting, and you can still quantify them very precisely. That is a big advantage. On top of that, you get better resolution. Overall, it's a mix of increased resolution and increased dynamic range that allows us to measure the samples very quickly and report all these features reproducibly.
NZ: We don’t; the problem is that without chromatography, we can only annotate the formula. We cannot annotate which structural isomer it is. The same formula might map to several compounds. Think of citrate and isocitrate, which have the same formula: without chromatography, we cannot distinguish between those two, or any of their peers.
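The point about citrate and isocitrate can be made concrete: structural isomers share a molecular formula and therefore an exact mass, so a flow-injection m/z measurement can only ever resolve the formula. A minimal sketch (the mass table and function names are illustrative, not from the webinar):

```python
# Without chromatography, annotation stops at the formula, because
# structural isomers share the same exact (monoisotopic) mass.
MONOISOTOPIC = {"C": 12.0, "H": 1.0078250319, "O": 15.9949146221, "N": 14.0030740052}

def monoisotopic_mass(formula: dict) -> float:
    """Sum the monoisotopic masses of the atoms in a formula."""
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items())

c6h8o7 = {"C": 6, "H": 8, "O": 7}  # shared formula of citrate AND isocitrate
mass = monoisotopic_mass(c6h8o7)
print(f"C6H8O7 monoisotopic mass: {mass:.4f} Da")
# Both isomers yield the same [M-H]- ion in negative mode, so the
# measured m/z maps to the formula, not to a single compound.
```

Any measured peak at this m/z is thus an ambiguous annotation shared by every isomer of the formula.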
There are two ways of approaching this. The first, more complex one, is a kind of bootstrapping. Since we have been doing this network integration, we’ve played with all kinds of scenarios: we might propagate a measurement to one of the alternative identities, to another one, to the combination, and so on. The idea is that if we propagate and test all kinds of combinations, from the network context we might still be able to find some hotspots, some regions where a lot of changes occur.
We did this in the early days; nowadays, we don't do it anymore. It turned out to be overkill. What we do now is take the measurement and propagate it to all the possible identities. Since we use this to make sense of the data, it typically turns out that you might have one change in a pathway, or nothing actually changing, versus a change where a metabolite in the pathway is changing along with many of its neighbors. From the pathway context, we can infer the most likely identity of the compound.
So nowadays we just propagate the same measurement to all the possible metabolites given the context. Since we see many metabolites changing at the same time in the same pathway, we basically get the network integration done without having to select an identity beforehand.
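The propagate-then-look-for-hotspots idea can be sketched in a few lines. This is an illustrative toy, not the group's actual pipeline: the candidate lists, fold-change values, and the tiny TCA-cycle-like graph are all made up, and the "hotspot score" here is simply the number of changing neighbors.

```python
from collections import defaultdict

# Hypothetical annotation: one formula -> all metabolites it could be.
candidates = {
    "C6H8O7": ["citrate", "isocitrate"],
    "C5H6O5": ["alpha-ketoglutarate"],
    "C4H6O5": ["malate"],
}
# Illustrative log2 fold-changes per measured formula.
fold_change = {"C6H8O7": 2.1, "C5H6O5": 1.9, "C4H6O5": 0.1}
CHANGE_THRESHOLD = 1.0  # |log2 FC| above which a node counts as changing

# Toy metabolic network (undirected adjacency).
edges = [("citrate", "isocitrate"),
         ("isocitrate", "alpha-ketoglutarate"),
         ("malate", "citrate")]
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

# Step 1: propagate each measurement to ALL of its candidate identities,
# without choosing one beforehand.
node_change = {}
for formula, mets in candidates.items():
    for met in mets:
        node_change[met] = fold_change[formula]

# Step 2: score each node by how many neighbors also change; high scores
# mark hotspot regions where the pathway context resolves the ambiguity.
def hotspot_score(node):
    return sum(1 for n in graph[node]
               if abs(node_change.get(n, 0.0)) >= CHANGE_THRESHOLD)

scores = {n: hotspot_score(n) for n in graph}
print(scores)
```

In this toy example, isocitrate sits between two changing neighbors and scores highest, which is exactly the kind of network evidence described above.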
NZ: Rarely. The reason is mostly that the pathway definition is very arbitrary. Biochemistry doesn't know what a pathway is; a pathway is a concept that emerged from decades of investigation. It could be that some metabolites in a pathway change, but if you use a classical pathway definition, these changes might not pop up because other members of the pathway are not changing. This depends mostly on the biochemistry and on how those pathways tend to work. We rarely use pathway enrichment because it's not really informative: it doesn't give us much more information on top of projecting the measurements straight onto the topology, onto the graph, onto the network, and finding the hotspots, without needing a predefined pathway as a starting point.
Sometimes you're lucky and it will work. But once you have the network and the full graph, pathway enrichment becomes a futile exercise; it's not relevant.
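For context, the classical pathway over-representation analysis being set aside here is usually a one-sided hypergeometric test: given how many metabolites changed overall, is the pathway over-represented among them? A minimal sketch with made-up numbers:

```python
from math import comb

def hypergeom_pvalue(N: int, K: int, n: int, k: int) -> float:
    """P(X >= k) when n changed metabolites are drawn from a universe
    of N annotated metabolites, of which K belong to the pathway."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Illustrative numbers: 100 annotated metabolites, 10 in the pathway,
# 15 changed overall, 5 of those in the pathway.
p = hypergeom_pvalue(N=100, K=10, n=15, k=5)
print(f"enrichment p-value: {p:.4f}")
```

The test's weakness is the one described above: it stands or falls with the arbitrary pathway boundary (the choice of K), whereas projecting changes directly onto the network graph needs no such predefined gene set.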
NZ: We separate them. These kinds of measurements, particularly at high throughput, work nicely if the system is set up in a way that's very robust, which is the key aspect. Switching between positive and negative usually starts disturbing the spray; the mass spec can handle it, but the main problem is really the spray. If you do high throughput with dirty samples, you also end up with a very dirty source. In order to keep the system very robust and still get very quantitative data, we run all the samples in negative ionization first, and then run them in positive.
In the early days, we were doing both modes. Nowadays, for several reasons, we focus on negative, because the compounds we look at tend to be very good anions: organic acids, nucleotides, or phospholipids. They ionize very nicely in negative mode, whereas in positive you more frequently see adducts. In 99% of the studies we do, we run only negative mode. In some of them, where there are compounds we really care about, such as polyamines, we also acquire in positive. But it's always separated; we don't do polarity switching in the same run.
NZ: Well, yes, there are a few. Usually, they're not so simple, in the sense that you must do statistics; they take a bit of time, and you cannot get a result within a few seconds. However, there are some approaches that are simpler, even if not so sophisticated. One which is well recognized and used is called Mummichog. I think it's also implemented in Metlin and some of the other community tools for metabolomics analysis.
There is also MetaboAnalyst, driven by David Wishart's group in Canada, which also works on the human metabolome; they have implemented some aspects of this as well. That’s what I would recommend. There are others, like CoMBI-T, which originally appeared in a 2015 paper on immunology; the mastermind behind it is Maxim Artyomov. There is also a web service named GAM where you can just upload your data. You can even include transcriptomics or proteomics data for some samples. It finds the sub-graphs of the network that are heavily regulated, and the computation is very simple: it's a web server, so you don't need to do anything other than provide your data, and you get as a result the sub-network that is most regulated.
Those are the simpler ones. More complex approaches require some mathematical knowledge to work through, and for those it's worth going to the literature and searching specifically.
NZ: The short answer is yes. We don't really do it ourselves, because our work is mostly based on running a lot of samples, which gives us relative differences; this is enough for us to generate hypotheses.
But the use of flow injection for quantitative analysis is not new. What you need is an isotopic internal standard, which can be deuterated or 13C-labeled, that is co-injected: you spike in a known amount. The instruments used are linear and quantitative enough that you can quantify any compound for which you have a standard. There are also commercial vendors, like Biocrates, that sell quantitative kits for all kinds of mass specs. They provide very nice data, and there are very nice papers where the results are reproduced across labs precisely because of these internal standards. Clearly, it only works for compounds for which a standard is available. But, in principle, it's possible. Flow injection per se, or the lack of chromatography, is not an issue. You just need the linearity of the detector, of the mass spec, and an internal standard to perform this internal normalization.
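The internal-standard arithmetic described above amounts to a single-point isotope-dilution calculation. A hedged sketch, under the stated assumptions of detector linearity and a co-ionizing labeled standard (the function name and numbers are illustrative):

```python
def quantify(analyte_intensity: float,
             standard_intensity: float,
             spiked_amount_uM: float,
             response_factor: float = 1.0) -> float:
    """Analyte concentration from the intensity ratio to a spiked,
    isotopically labeled internal standard.

    Assumes the mass spec responds linearly; response_factor corrects
    for any ionization difference between analyte and standard
    (1.0 for an ideal isotopic standard).
    """
    return (analyte_intensity / standard_intensity) * spiked_amount_uM * response_factor

# Example: analyte peak is 2x the standard peak; 5 uM of labeled
# standard was spiked into the sample.
conc = quantify(analyte_intensity=2.0e6, standard_intensity=1.0e6,
                spiked_amount_uM=5.0)
print(f"estimated concentration: {conc} uM")
```

Because analyte and labeled standard co-ionize and suffer the same matrix effects, the ratio cancels most of the run-to-run variability, which is why such kits reproduce well across labs.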
Find out more on this topic by watching the full webinar on demand>>
SelectScience runs 3-4 webinars a month across various scientific topics. Discover more of our upcoming webinars>>