The Canadian Journal of Chemical Engineering, Volume 99, Issue 6, June 2021, Pages 1288-1306

Abstract

The conventional one-at-a-time strategy for evaluating catalysts is inefficient and resource-intensive. Even a fractional factorial design takes weeks to control for temperature, pressure, composition, and stability, and quantifying day-to-day variability and data quality adds further to the time required. High-throughput catalyst testing (HTCT) with as many as 64 parallel reactors reduces experimental time by two orders of magnitude and decreases variance because it can quantify random errors. This approach to heterogeneous catalyst development requires dosing each reactor with precisely the same flow and composition, controlling the temperature and identifying the isothermal zone, analyzing the gas phase on-line, and maintaining constant pressure with a common back-pressure regulator. Silica capillary or microfluidic distribution-chip manifolds split a common feed stream precisely (standard deviation <0.5%), thus guaranteeing reproducibility. With these systems, experimenters shift their focus from operating a single reactor to careful catalyst synthesis, the sometimes delicate loading of the catalytic reactors, and the handling, processing, and analysis of massive amounts of data. This review presents an overview of the main elements of HTCT, its applications, potential sources of uncertainty, and a set of best practices derived from the scientific literature and research experience.
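
As an illustration of the flow-splitting criterion quoted above (standard deviation <0.5% across parallel channels), the following minimal Python sketch computes the relative standard deviation of measured per-channel flows for a 64-reactor manifold. The function name, nominal flow rate, and simulated scatter are illustrative assumptions, not values from the review.

```python
import numpy as np

def flow_split_rsd(channel_flows):
    """Relative standard deviation (%) of the flow delivered to each parallel channel.

    channel_flows: 1-D array of measured flow rates (e.g., mL/min) for every
    reactor channel fed from the common manifold.
    """
    flows = np.asarray(channel_flows, dtype=float)
    return 100.0 * flows.std(ddof=1) / flows.mean()

# Hypothetical example: simulated flows for a 64-channel manifold
rng = np.random.default_rng(0)
flows = rng.normal(loc=10.0, scale=0.03, size=64)  # nominal 10 mL/min per channel (assumed)
print(f"Flow-split RSD: {flow_split_rsd(flows):.2f}%  (criterion: < 0.5%)")
```

The same statistic, computed on replicate conversion or selectivity measurements rather than flows, is one way the day-to-day variability mentioned in the abstract can be quantified.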