Expert Insight: An Introduction to the Method Lifecycle Management Approach

In this Q&A from our popular webinar, learn about an analytical procedure lifecycle approach being implemented by the pharmaceutical industry and how it can benefit your lab

18 Apr 2019

The pharmaceutical industry relies on data generated by analytical methods for many critical decisions. Traditional approaches consist of distinct exercises — method development, validation, transfer, verification — with limited understanding of the effect of variation on method performance.

The analytical Method Lifecycle Management (MLCM) approach holistically encompasses all activities in a method's life, from development through validation, routine use, change control and retirement. This enhanced approach improves method understanding and performance, leads to fewer out-of-specification (OOS) results, facilitates method transfer and has the potential to lessen the regulatory burden.

In one of our most popular webinars to date, now available on demand, experts from the USP and the pharmaceutical industry provide an overview of the MLCM approach and how it is being implemented.

Dr. Horacio Pappa, Director of General Chapters, United States Pharmacopeia, shares key concepts of MLCM and the current thinking of the USP, FDA and ICH, while Dr. Alexander Schmidt, Founder and CEO, and Mijo Stanic, Cofounder, General Manager and Technical Director, Chromicent Pharmaceutical Services, present three case studies of how the approach is already being implemented in a pharmaceutical research setting, utilizing UHPLC and convergence chromatography technology from Waters Corporation.

If you missed the webinar, don’t worry! You can now watch it on demand at a time that suits you. 

During the live webinar we received many questions from our audience on this trending topic. Read on for insights into the MLCM approach from Dr. Horacio Pappa, Dr. Alexander Schmidt, Mijo Stanic and Greg Martin, Vice Chair of General Chapters, United States Pharmacopeia and President, Complectors Consulting, LLC.

 

With regard to the analytical target profile (ATP), do you think it's important to qualify the method and determine target measurement uncertainty (TMU) acceptance criteria at each step of pharmaceutical development?

Dr. Horacio Pappa, Director of General Chapters, United States Pharmacopeia

HP: Keep in mind that the USP usually focuses on those methods that are used at the finalization of development, i.e., when the product is already on the market. Having said that, the answer is usually no. You don't need to put a lot of effort into methods that are developed during the process of product development. It's the same philosophy that we use now for classical validation: the big validation effort comes when the method is already defined and the product is already developed.

 

How do I define acceptable TMU for Class I (Assay) and Class II (Impurity) analytical procedures?

HP: This would depend on how critical the method is. I recommend reading this article.

 

ICH Q12 defines a concept called Established Conditions. How is this concept related to ATP? 

HP: Established Conditions is a regulatory term. At this point, it is outside the USP discussion. 

GM: What I can say is that the analytical target profile (ATP) will essentially establish some system suitability requirements. For example, an ATP for a degradant method might state: “The method is capable of separating the active drug from degradant A and degradant B with a resolution of NLT 1.5, with the ability to quantitate peaks at or above the reporting threshold of 0.10% of claim.”

  •  In this example, the system suitability test should include resolution requirements for active, deg A and deg B, and sensitivity requirements for a 0.10% std (probably signal-to-noise ratio of NLT 10).

I guess it could be interpreted that the MLCM establishes the system suitability requirements.
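
To make this concrete, here is a minimal sketch (not from the webinar) of how the ATP-derived criteria in Greg Martin's example could be encoded as an automated system suitability check. The function name, structure and injection results are purely hypothetical:

```python
# Illustrative sketch only (not from the webinar): encoding the
# ATP-derived system suitability criteria from the example above as an
# automated pass/fail check. All names and numbers are hypothetical.

def check_system_suitability(resolutions, sensitivity_sn,
                             min_resolution=1.5, min_sn=10.0):
    """Pass if every critical pair meets the resolution requirement
    and the 0.10% sensitivity standard meets the S/N requirement."""
    for pair, rs in resolutions.items():
        if rs < min_resolution:
            print(f"FAIL: resolution {pair} = {rs:.2f} (NLT {min_resolution})")
            return False
    if sensitivity_sn < min_sn:
        print(f"FAIL: S/N of 0.10% std = {sensitivity_sn:.1f} (NLT {min_sn})")
        return False
    return True

# Made-up injection results:
ok = check_system_suitability(
    resolutions={"active/deg A": 2.1, "deg A/deg B": 1.7},
    sensitivity_sn=14.3,
)
print("System suitability:", "PASS" if ok else "FAIL")
```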

 

In Quality-by-Design (QbD) for methods, how do you define precision for samples and accuracy?

HP: This is not just for Quality-by-Design; it is also included in the approach we took in our new chapter <1210>, which provides statistical tools for procedure validation. We promote the use of the total error approach, where accuracy and precision are combined into one parameter. So, no matter how your accuracy or precision vary individually, what matters is that, when you put them together, the total error is within what you define in your ATP.
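
As an illustration of the total error idea, the sketch below combines bias (accuracy) and standard deviation (precision) into a single figure and compares it to a hypothetical ATP limit. This is a simplified formulation, not the exact statistics of <1210>, which relies on more rigorous interval-based methods; all data are invented:

```python
# Simplified illustration of the total error concept: combine bias
# (accuracy) and standard deviation (precision) into one figure and
# compare it to an ATP-defined acceptance limit. Data are made up.

import statistics

def total_error(recoveries_pct, target=100.0, k=2.0):
    """|mean bias| + k * SD, all in % of the target value."""
    bias = abs(statistics.mean(recoveries_pct) - target)
    sd = statistics.stdev(recoveries_pct)
    return bias + k * sd

# Hypothetical recovery data (% of label claim) and a 3.0% ATP limit:
recoveries = [99.1, 100.4, 98.8, 99.6, 100.9, 99.3]
te = total_error(recoveries)
print(f"Total error = {te:.2f}% (ATP limit 3.0%):",
      "PASS" if te <= 3.0 else "FAIL")
```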

 

How is QbD considered in the adjustments permitted by Chapter <621>?

HP: That's a good question. Far in the future, when methods are developed and validated using QbD concepts, the information we will obtain at the pharmacopeia on the performance of the method (its potential accuracy and precision, and its ability to meet the appropriate system suitability requirements) will be very handy and of great value in the process of creating the monograph.

So, in other words, all the adjustments that we proposed in <621> will no longer be needed, because the monograph will contain all the information necessary for acceptable adjustments to the method. The adjustments in <621> were introduced as a temporary solution, because of the lack of specific information on the design space of the method.

 

If there are three laboratories in one firm (R&D and QC in two different locations), do you agree that methods validated out of house should be verified in all three laboratories, or, alternatively, if validated in R&D, they should be transferred and verified in both QC labs?

HP: I would say yes. The basic concept is that you validate a method that is not in the pharmacopeia, as it has not been validated before; you verify methods that are in the compendia; and you transfer validated methods that are not in the compendia from one laboratory to another.

However, when you are moving compendial methods from one lab to another, there is not a clear line between verification and transfer. What is important, I think, is that the verification activity is laboratory-dependent because, within the verification process, you try to demonstrate that the conditions under which the method will be performed are suitable. And that includes not just the analytical characteristics of your sample, but also the laboratory conditions.
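
The rule of thumb described above could be summarized, purely for illustration, as a simple decision function. The function and its flags are hypothetical; as noted, real decisions also weigh laboratory conditions and risk:

```python
# Hypothetical decision helper summarizing the rule of thumb above;
# real decisions also consider laboratory conditions and risk.

def required_activity(is_compendial, previously_validated, moving_between_labs):
    """Map a method's status to validate / verify / transfer."""
    if is_compendial:
        return "verify"    # compendial methods are verified in each lab
    if previously_validated and moving_between_labs:
        return "transfer"  # validated in-house methods are transferred
    return "validate"      # new, non-compendial methods are validated

# Example: an R&D-validated in-house method moving to a QC lab
print(required_activity(is_compendial=False, previously_validated=True,
                        moving_between_labs=True))  # -> transfer
```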

 

It was mentioned that linearity and the quantitation limit (QL) are part of method development and that only precision and accuracy are needed for the validation. Does that mean that the official validation protocol would only include tests for precision and accuracy? And what would be included in the final validation report needed for filing of the method?

HP: First, I need to clarify that the approach discussed today is an idea/model. There is nothing on the near horizon that indicates that you will have to change your filings or even the validation protocol that you have to submit to the pharmacopeia. 

What some groups believe is that when you define the total error, through accuracy and precision, across the range in which the method needs to operate, nothing else is needed to define the performance of the method in that region. Things like linearity, understood as the response function of the method, are secondary to the main accuracy and precision combination. So, you define those during method development based on your ATP, and then the validation, or the qualification, depending on what the final terminology will be, would cover just accuracy and precision.

Again, as I say in my presentation, this is a work in progress and not everybody agrees on the same concept, so I tried to present all the current ideas.

GM: Theoretically, all testing, such as linearity, robustness, etc. (as stated in ICH Q2), is performed as part of method understanding and development in stage 1. So, when the method is ready for stage 2, i.e., validation, only accuracy and precision are needed to confirm the method is performing consistently.

 

What parameters should I consider for impurity method verification?

HP: In general, accuracy, precision and LoQ.

 

What should I consider when monitoring method performance during MLCM? What parameters should be measured and compared, and how often?

AS/MS: Please see USP chapter <1010> Analytical Data – Interpretation and Treatment for details of which parameters can be used for monitoring.

GM: It depends on the method, but examples of parameters to trend are: resolution, retention time, peak area/response, peak tailing, efficiency, and system pressure.
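
As a simple illustration of trending (the formal statistical treatments are described in <1010>), the sketch below flags system suitability results that fall outside control limits built from historical runs; all values are made up:

```python
# Illustrative trending sketch (USP <1010> describes the formal
# statistics): flag system suitability results that drift outside
# mean +/- 3 sigma control limits built from historical runs.

import statistics

def control_limits(history):
    """Return (lower, upper) 3-sigma control limits."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return mean - 3 * sd, mean + 3 * sd

# Hypothetical resolution values from past system suitability runs:
history = [2.05, 2.10, 2.02, 2.08, 2.11, 2.06, 2.04, 2.09]
low, high = control_limits(history)

for run, rs in [("run 9", 2.07), ("run 10", 1.82)]:
    status = "OK" if low <= rs <= high else "INVESTIGATE"
    print(f"{run}: Rs = {rs:.2f} -> {status} (limits {low:.2f}-{high:.2f})")
```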

 

Are draft versions of ICH Q2(R2) and ICH Q14 available?

HP: Not yet. The expectation is that the first readable version will be discussed at the next meeting in June in Amsterdam. After that, I think you should expect to see the document on the ICH website, probably within a couple of months.

 

In what time frame will USP <1220> be implemented?

HP: A new expert panel will start its work in April, with the goal of publishing <1220> in Pharmacopeial Forum (PF) in 2019-2020.

 

Dr. Alexander Schmidt, Founder and CEO, Chromicent Pharmaceutical Services

For a newly developed method, how should method validation be performed with the MLCM approach?  What about compendial methods?  What about methods that have been validated but are being transferred?

AS/MS: Currently, USP chapters <1224>, <1225> and <1226>, as well as the ICH Q2(R1) guideline, are valid and should be followed. A revision of Q2 is planned and will include details on how to proceed.

 

How would one apply QbD for very early stage method development projects?

AS/MS: Please see our publication “Lifecycle Management for analytical methods” in Journal of Pharmaceutical and Biomedical Analysis 147 (2018) 506–517 for details.

 

If we are following a vendor method for impurities, how do I find the limit of quantitation (LoQ)? 

AS/MS: Just make a solution of the API at 0.05% (or whatever your LoQ is) and inject it. If the signal-to-noise ratio is at least 10:1, the LoQ requirement is fulfilled.
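
For illustration, a common pharmacopoeial-style signal-to-noise calculation (S/N = 2H/h, where H is the peak height above baseline and h is the peak-to-peak baseline noise in a blank region) might be scripted as follows; the data are invented:

```python
# Sketch of a pharmacopoeial-style signal-to-noise estimate
# (S/N = 2H/h, with H the peak height above baseline and h the
# peak-to-peak baseline noise in a blank region). Data are made up.

def signal_to_noise(peak_height, blank_baseline):
    noise = max(blank_baseline) - min(blank_baseline)  # peak-to-peak noise h
    return 2 * peak_height / noise

# Hypothetical 0.05% impurity injection:
blank = [0.10, -0.08, 0.12, -0.11, 0.09, -0.07]   # detector units
sn = signal_to_noise(peak_height=1.55, blank_baseline=blank)
print(f"S/N = {sn:.1f}:", "LoQ requirement met" if sn >= 10 else "not met")
```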

 

How does continued method improvement fit with the requirement that every change to a validated method requires a risk assessment (RA) submission, which often impacts the supply chain for several months?

AS/MS: Please refer to the concept of Established Conditions and post-approval change management protocol of the ICH Q12 guideline which will guide you through this process.

Mijo Stanic, Cofounder, General Manager and Technical Director, Chromicent Pharmaceutical Services

What is your strategy to qualify the method operable design region (MODR)? Validation at each extreme point of the MODR is challenging for pharmaceutical companies.

MS: When we receive the design space, it is often only a model. We then perform verification points: the working point and six points around it. Sometimes we also run verification points outside of the MODR or design space, to check that the knowledge space shows not only where the method performs well, but also where it does not. After the validation, we don't perform any robustness tests, because we have already performed them at the end of development.
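
Purely as an illustration of the scheme Mijo Stanic describes (the working point plus six points around it), the sketch below generates such a verification set; the factor names, units and step sizes are hypothetical:

```python
# Hypothetical sketch of the verification scheme described above:
# the working point plus six points around it (one step up and one
# step down on each of three method factors). Factor names, units
# and step sizes are made up for illustration.

working_point = {"gradient_time_min": 15.0, "temp_C": 40.0, "pH": 3.0}
steps = {"gradient_time_min": 1.0, "temp_C": 2.0, "pH": 0.2}

points = [dict(working_point)]               # the working point itself
for factor, step in steps.items():
    for direction in (+1, -1):               # one point on each side
        p = dict(working_point)
        p[factor] += direction * step
        points.append(p)

for p in points:                             # 1 + 6 = 7 runs to verify
    print(p)
```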

 

How do you design a MODR when the method is new? I.e., for an innovative drug that has no existing compendial monograph.

AS/MS: Design of Experiments will help you to define a method operable design region. From there, you would then choose a working point within the MODR and perform robustness testing.
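
A minimal Design of Experiments sketch is shown below, with hypothetical factor levels; in practice, modeling software such as Fusion or DryLab fits the responses from these runs to map the MODR:

```python
# Minimal Design-of-Experiments sketch: a full-factorial grid over
# three chromatographic factors. In practice, modeling software fits
# the measured responses from these runs to map the MODR. Factor
# levels are hypothetical.

from itertools import product

gradient_times = [10.0, 20.0]        # min
temperatures = [30.0, 40.0, 50.0]    # deg C
ph_values = [2.8, 3.0, 3.2]

experiments = [
    {"gradient_time_min": gt, "temp_C": t, "pH": ph}
    for gt, t, ph in product(gradient_times, temperatures, ph_values)
]
print(f"{len(experiments)} runs")    # 2 x 3 x 3 = 18 experiments
for run in experiments[:3]:          # preview the first few runs
    print(run)
```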

 

Are you able to peak track unknown impurities/unknown masses at the 0.05% level relative to the main peak with the Waters QDa Mass Detector?

MS: It depends, of course, on the analyte and the method. Usually, yes. But sometimes it also happens that even larger peaks are not detectable by MS: the analytes have to be ionized, and everything depends on their ionization.

 

What Analytical Quality by Design (AQbD) software do you use?

AS/MS: We use Fusion software, DryLab software, and Design Expert for AQbD.

 

Can you import chromatographic data directly into DryLab to allow accurate modeling of peak widths?

AS/MS: Yes, you can. Export a chromatogram in AIA format from your CDS and import it directly into DryLab.

 

Which modeling system do you recommend, DryLab or Fusion?

MS: That's a difficult question. It depends on what you want to do; each software package has its benefits. In DryLab, you can change many parameters, for example, gradient time, percent start and percent end. You can implement gradient steps, and you can change your column dimensions, the flow rate and the dwell volume. After you receive the knowledge space, you can see how the chromatogram will run. In Fusion, you have the benefit that you can incorporate every result you receive from your chromatographic system, whether it is tailing factor, peak height, recovery or resolution. Overall, it depends on the user and the focus.

 

How much total analyst or manager time was required to address the issues in the case studies presented?

AS: For the Avastin case study, we worked on it for two weeks. And for the pramipexole case study, it was a little bit quicker – I think it was done in two days or so. 

MS: For the third case study, I needed two weeks. All three case studies presented were quite easy; sometimes we need up to two months for a very difficult method development.

 

Missed the webinar? Watch it on demand now>>

SelectScience runs 3-4 webinars a month across various scientific topics. Discover more of our upcoming webinars>>