Pharma processes are among the more complex processes in the chemical industry. Over the past decade or so, the traditional trial-and-error approach of “quality by testing” has been consciously replaced, partly or fully, by a more scientific approach across all aspects of pharmaceutical engineering – discovery, development and manufacturing. This “Quality by Design (QbD)” paradigm – a risk- and fundamentals-based approach to process development and transfer – makes it imperative that the entire development lifecycle be carried out in a systematic, science-based and quality-risk-driven manner, so that the quality of the final product is no longer left to chance. A range of tools and supporting technologies for QbD have been identified – chief amongst them Process Analytical Technology (PAT), statistical design of experiments (DoE), risk assessment tools and modeling/simulation.
The central tenet of QbD is “process understanding” – a term used by the US FDA to describe the degree of knowledge that an organization has acquired about its product and the pertinent manufacturing processes. According to guidance documents published by the US FDA, a system is considered well understood when:
- sources of variability (in quality etc.) have been systematically identified and can be explained
- process variability is managed/controlled reliably
- product quality attributes can be accurately and reliably predicted over the design space established for the materials and process conditions
Right – that does make sense. On to the next question – why should we care about process understanding?
We should care – a lot – because process understanding is not a nice-to-have. It is critical to achieving a state of operations where things are done right the first time, every time. With good process understanding, an organization stands to reap a number of impactful business benefits such as reduced time to market, better use of resources, elimination of waste and, perhaps most importantly, a no-surprises path to regulatory approval.
The articles and content on this site have a single focus – to provide clarity on the what, why and how of pharma process understanding. They will describe – in clear terms – how to implement methods that enable better process understanding for the development and transfer of drug manufacturing technology.
The methods introduced in this blog are based on a combination of computational/first-principles modeling and data analytics. Further, they help separate the scale-dependent aspects of a unit operation from its scale-independent aspects. As a result, using the methods described here, an engineer can use data from laboratory experiments – typically collected in small-scale equipment – to scale up confidently to manufacturing, whether to support clinical trials or production at larger scales.
As we will describe in further posts, the scale-dependent aspects of a unit operation can be quantified conveniently using computational/first-principles models. This step in the hybrid approach, dubbed “asset characterization”, results in a “digital nameplate” for the process asset under consideration, such as a crystallizer, a bioreactor, a tablet press or a granulator. Similar to a physical nameplate, the digital nameplate provides details on the capabilities of the asset, or “process metrics” – expected mixing times, heat transfer capabilities, gas-liquid mass transfer effectiveness etc. – as a function of process conditions and material properties.
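To make the idea concrete, here is a minimal sketch of what a digital nameplate might look like in code. Everything here is illustrative: the `DigitalNameplate` class, the asset IDs and the numerical constants are all hypothetical. The gas-liquid mass transfer metric uses a van ’t Riet-style power-law correlation, kLa = a·(P/V)^α·(u_g)^β, which is a standard form in the stirred-tank literature; the exact constants for a real asset would come from the characterization step described above.

```python
from dataclasses import dataclass


@dataclass
class DigitalNameplate:
    """Illustrative 'digital nameplate' for a stirred bioreactor.

    Maps process conditions to a process metric (here, kLa) via a
    power-law correlation: kLa = a * (P/V)**alpha * u_g**beta.
    All constants are hypothetical placeholders, not fitted values.
    """
    asset_id: str
    working_volume_m3: float
    a: float = 0.026      # correlation prefactor (illustrative)
    alpha: float = 0.4    # exponent on specific power (illustrative)
    beta: float = 0.5     # exponent on superficial gas velocity (illustrative)

    def kla_per_s(self, power_w: float, gas_flow_m3_s: float,
                  cross_section_m2: float) -> float:
        """Predict the gas-liquid mass transfer coefficient kLa (1/s)."""
        pv = power_w / self.working_volume_m3    # specific power, W/m^3
        u_g = gas_flow_m3_s / cross_section_m2   # superficial gas velocity, m/s
        return self.a * pv**self.alpha * u_g**self.beta


# Example: query the nameplate of a hypothetical 200 L lab bioreactor
lab = DigitalNameplate("lab-200L", working_volume_m3=0.2)
print(lab.kla_per_s(power_w=100, gas_flow_m3_s=0.001, cross_section_m2=0.25))
```

The point of the sketch is the interface, not the numbers: once an asset has been characterized, its nameplate answers “what can this vessel deliver under these conditions?” without re-running the underlying models.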
The scale-independent aspects of a unit operation are, in a similar manner, captured by so-called “process signatures”. Process signatures combine the experimental data and asset characterization information to create relationships between process metrics and quality attributes that hold at any scale. They are based on the notion that as long as the processing “micro”-environment remains the same, no matter the shape and size of the vessel, the end product quality is bound to remain the same.
Combining digital nameplates and process signatures offers a powerful way in which information from lab data can be appropriately contextualized and reused for scaling up and technology transfer. In the next post in this series, we will look at how asset characterization and digital nameplating of process assets can be achieved using modeling tools.
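The combination can be sketched as a two-step inversion: work backwards from a target quality attribute through a signature to the required process metric, then through the nameplate correlation to the operating conditions. All numbers below (signature coefficients, correlation constants, vessel size, target titer) are hypothetical, chosen only to show the flow of information.

```python
# Step 1: invert a hypothetical linear signature, titer = slope*kLa + intercept,
# to find the kLa needed for a target titer at manufacturing scale.
slope, intercept = 198.0, 0.05            # illustrative fitted values
target_titer = 4.0                        # g/L, desired quality attribute
required_kla = (target_titer - intercept) / slope   # 1/s

# Step 2: invert an illustrative nameplate correlation for the large vessel,
# kLa = a * (P/V)**alpha * u_g**beta, to find the specific power needed.
a, alpha, beta = 0.026, 0.4, 0.5          # illustrative constants
u_g = 0.004                               # superficial gas velocity, m/s (held fixed)
pv = (required_kla / (a * u_g**beta)) ** (1 / alpha)  # specific power, W/m^3
power_w = pv * 2.0                        # total power for a 2 m^3 working volume

print(f"required kLa: {required_kla:.4f} 1/s, agitation power: {power_w:.0f} W")
```

The lab data never mentions the production vessel; the nameplate supplies the equipment-specific translation, which is exactly the contextualization described above.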
In the subsequent posts, we show how such digital nameplates of assets – or DNA (!) – can be used with experimental data to create signatures of processes using examples chosen from different processes across the pharma and biopharma process development, transfer and qualification landscape.
– Mothivel Mummudi Boopathy