The previous articles in this series made the case for starting with a good Process Concept to drive the scale-up effort (‘Start with the Process Concept’), with Creative Process Engineering serving as one key aspect of that approach.
We draw on Modeling and Analysis as a second key element: to set targets for economic and sustainability performance, to encapsulate experimental data in engineering models, and to design process equipment. It is critical, however, to recognize the limitations of models. The British statistician George Box liked to say that all models are wrong, but some are useful[i]. For our purposes, models should be useful tools that support process development, scale-up, and design, rather than exact replications of the system in question. To carry the metaphor further, we need an entire toolbox at our disposal, and we need to make sure we have the right tool for each job.
I typically like to start with something simple and build out detail from there. A simple mass balance in a spreadsheet is a great place to start! (A minimal sketch of that kind of calculation appears after the list below.) We can then layer more detail onto this simple model and develop other types of models as the requirements dictate. Examples of additional useful model types include:
· Kinetic models for chemical and biological reaction systems.
· Reactor design models for common reactor types, such as packed bed, trickle flow, fluidized bed, and external loop.
· Phase equilibrium models to support design of separation systems.
· Life Cycle Analysis models for sustainability analysis.
· Technoeconomic models for economic analysis.
· Process simulation models for flowsheet and equipment design.
The level of detail needed is driven by the requirements of the task at hand.
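To make that starting point concrete, here is a minimal sketch, in Python rather than a spreadsheet, of an overall steady-state mass balance around a single reactor. The stream names, flow rates, conversion, and yield are all hypothetical placeholders, not values from this series.

```python
# Minimal steady-state mass balance around a single reactor,
# the kind of calculation you might otherwise do in a spreadsheet.
# All stream names, flow rates, conversion, and yield are hypothetical.

feed = {"substrate": 1000.0, "water": 9000.0}  # kg/h entering the reactor

conversion = 0.80       # fraction of substrate converted (assumed)
product_yield = 0.45    # kg product per kg substrate converted (assumed)

substrate_converted = feed["substrate"] * conversion
product = substrate_converted * product_yield
byproduct = substrate_converted - product  # everything else, e.g. CO2 and cell mass

outlet = {
    "product": product,
    "byproduct": byproduct,
    "substrate": feed["substrate"] - substrate_converted,
    "water": feed["water"],
}

# Consistency check: total mass in must equal total mass out.
mass_in = sum(feed.values())
mass_out = sum(outlet.values())
assert abs(mass_in - mass_out) < 1e-6, "mass balance does not close"

for stream, rate in outlet.items():
    print(f"{stream:>10s}: {rate:8.1f} kg/h")
```

The closing assert is the same sanity check you would build into a spreadsheet: total mass in must equal total mass out.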
Where data does not exist, or is inconclusive, assumptions can be used to establish a working model. We can then evaluate how critical those assumptions are to the system in question by exploring sensitivities. If the answer is ‘very critical’, that result can be used to shape upcoming experimental activities. This interplay between engineering design, modeling, and experimentation is quite important. When modeling is done in a vacuum, with little or no interaction with experimentalists, the result is often a very beautiful model with limited value. Similarly, some experimentalists insist it is impossible to model their system and find no value in the numbers spit out by an egghead running a spreadsheet. The reality is that a useful model can, and should, complement experimentation to reduce the time and cost of scale-up, providing insight into when additional data is needed to enhance understanding. A great model can also produce results and understanding that would be too time consuming, too costly, or simply impossible to obtain through additional experimentation. Models can also point to promising directions for future experimental programs.
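As an illustration of what exploring sensitivities can look like, the sketch below sweeps an assumed conversion through a range and reports the effect on a rough cost-per-kilogram estimate. The cost model and every number in it are hypothetical; the point is the pattern: if the output swings widely, the assumption is ‘very critical’ and worth testing in the lab.

```python
# Sketch of a one-parameter sensitivity sweep on an assumed value.
# The cost model and all numbers are hypothetical placeholders.

def cost_per_kg(conversion, feed_cost=0.50, fixed_cost_per_h=120.0,
                feed_rate=1000.0, product_yield=0.45):
    """Rough production cost per kg product for an assumed conversion."""
    product_rate = feed_rate * conversion * product_yield  # kg/h
    total_cost = feed_rate * feed_cost + fixed_cost_per_h  # cost per hour
    return total_cost / product_rate

base = cost_per_kg(0.80)  # base-case assumption
for x in (0.60, 0.70, 0.80, 0.90):
    c = cost_per_kg(x)
    print(f"conversion {x:.2f}: {c:.2f} per kg "
          f"({(c - base) / base:+.0%} vs. base case)")
```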
The models should then be refined as more data is collected; this is not a ‘set and forget’ exercise. That data should be generated at multiple scales to enhance the robustness and utility of the model. The final article in this series will dive deeper into this critical issue of experimental data.
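As a hedged sketch of what that refinement loop can look like, the snippet below re-fits a single first-order rate constant by least squares as new data arrive, here with points from two assumed scales pooled into one fit. The rate law, the data, and the use of scipy are illustrative assumptions, not a prescription.

```python
# Illustrative refit of a first-order rate constant as new data arrive.
# The rate law, data points, and initial guess are all hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, k, c0=10.0):
    """Concentration vs. time for an assumed first-order decay."""
    return c0 * np.exp(-k * t)

# Pooled data from two scales: bench (0-4 h) and pilot (0-8 h), g/L.
t_data = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
c_data = np.array([10.0, 7.8, 6.1, 3.7, 2.3, 1.4])

(k_fit,), cov = curve_fit(first_order, t_data, c_data, p0=[0.2])
print(f"refit rate constant k = {k_fit:.3f} 1/h "
      f"(std. error {np.sqrt(cov[0, 0]):.3f})")
```

Watching the standard error tighten (or not) as data from new scales are folded in is one simple signal of whether the model is converging.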
[i] Box, G. E. P. (1979), "Robustness in the strategy of scientific model building", in Launer, R. L. and Wilkinson, G. N. (eds.), Robustness in Statistics, Academic Press, pp. 201–236.