  • One-Size-Fits-All Paperless Validation – Is It Really Fit for Use?

    Synopsis:

    Popular validation tools have a basic flaw: they were designed only to carry out qualification activities. This article highlights the differences between qualification and validation, and explains why, for process and cleaning validation/monitoring, such “paperless validation” applications offer little more value than a document management system. An efficient digital validation platform has to be a suite of applications, each designed for its own purpose – qualification, manufacturing process management, cleaning process management – yet integrated so as to provide a smooth flow of data.

    Introduction

    No manufacturing sector can survive without a relentless focus on quality – because the very essence of manufacturing is to produce consistent, standardized products. Any lapse in quality in the pharmaceutical industry leads to regulatory scrutiny, fines, damaged reputations, and ultimately business failure.

    In pharmaceutical manufacturing, the regulatory expectation is that the process be in “a state of control”. Historically, we “validate” processes such as manufacturing, packaging, cleaning, or QC methods. The core objective of validation is to demonstrate and document consistent, controlled performance over time. Validation is more than a regulatory requirement – it is the foundation of operational consistency and product quality.

    As digital “validation” solutions flood the market, the expectation is that these tools do more than merely replicate paper processes in electronic form – that they enable smarter, more integrated validation approaches aligned with regulatory requirements and operational needs. But do they deliver on this promise?

    This article explores the differences between validation and qualification, and why the “one-size-fits-all” approach to “validation” falls well short of customer expectations.

    Let’s Clarify the Basics

    Before delving deeper, it is crucial to differentiate between two terms that are often confused: qualification and validation.

    • Qualification is expected to show that the equipment, utilities, instruments, and software used in a GMP environment have been installed correctly and function as intended.
    • Validation is expected to show that processes are “in a state of control”.

    In simple terms, Qualification answers, “Does this entity work as per our requirements?” while Validation dives deeper to explore, “Can we get a reliable and consistent product?”

    Understanding Qualification

    The User Requirements Specification (URS) is usually the first document created when a new entity/asset is being planned, detailing the business, compliance, and operational requirements. It is prepared by stakeholders who are either responsible or accountable for that asset/entity.

    The selected vendor(s), or the internal resource where applicable, respond with a Functional Specification (FS) detailing how the requirements in the URS will be met.

    The specific test script document requirements vary depending on what is being qualified. However, the intent remains the same: to demonstrate, through test scripts, that the User Requirement Specification (URS) requirements have been met. These test scripts may be documented in Factory Acceptance Tests (FAT), Installation Qualification (IQ), Operational Qualification (OQ), Site Acceptance Tests (SAT), or Performance Qualification (PQ). The selection of which of these to include depends on the particular asset or system being qualified.

    A Traceability Matrix document is then put together mapping each requirement to the corresponding tests, detailing the user requirement ID and the test script ID. This matrix provides clear visibility of how every user requirement specified in the URS is verified through specific test cases. 
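    The mapping described above can be sketched as a simple data structure. The requirement and test-script IDs below are purely illustrative, not taken from any real URS:

```python
# Minimal sketch of a traceability matrix: each URS requirement ID is
# mapped to the IDs of the test scripts that verify it (all IDs invented).
traceability_matrix = {
    "URS-001": ["IQ-004"],             # verified during Installation Qualification
    "URS-002": ["OQ-010", "PQ-002"],   # verified in OQ and PQ scripts
    "URS-003": [],                     # no test script assigned yet
}

# A basic completeness check: flag any requirement not covered by a test.
uncovered = [req for req, tests in traceability_matrix.items() if not tests]
print(uncovered)  # ['URS-003']
```

    A real matrix would also record the requirement text, the test step, and the verification outcome, but the core structure is this requirement-to-test mapping.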

    In this sense, qualification is a documentation-driven, structured, checklist-based activity, focused on verifying that the vendor-supplied asset/entity meets predefined specifications.

    Understanding Validation

    Validation, by contrast, focuses on ensuring that processes consistently produce outcomes that meet predetermined specifications.

    According to the FDA, validation is: “Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes.”

    Where qualification proves that an asset/entity is installed and functions correctly, validation demonstrates that the processes using that asset/entity are robust, repeatable, and compliant. Furthermore, it is even more critical that “effective monitoring and control systems” are put in place for “process performance and product quality, thereby providing assurance of continued suitability and capability of processes”. This is a requirement under ICH Q10, and its absence has remained one of the most cited inspection observations for the last decade. The goal in validation and monitoring is not just compliance but smarter, data-driven decisions, with continuous improvement as an important pillar.

    Validation, therefore, is data-driven and focused on data analysis, providing assurance that the process will consistently produce product whose quality meets predetermined specifications.

    Given that qualification and validation have very different approaches and end goals, is it possible to use one software tool to manage both requirements?

    Many software vendors market “all-in-one” applications as a one-stop solution. These apps promise to provide a unified platform for validation.  At first glance, the appeal of a unified system is undeniable. After all, a single tool that manages documentation, qualification, and validation activities across an entire organization sounds like a great solution.

    However, when the design requirements for a qualification activity bear no relationship to what must be done in validation and ongoing monitoring, how can the two use cases be combined?

    The main data type handled by a qualification management system is textual information. User requirements are primarily captured as detailed text entries within structured tables, and supporting test scripts are also documented largely in text form. While most data centers on descriptive narratives – such as specifications, acceptance criteria, and rationales – numeric data appears mainly when recording process parameters during performance qualification (PQ) of equipment. This emphasis on text ensures that every requirement and its verification are clearly documented for traceability, auditability, and regulatory compliance.

    Validation and ongoing monitoring activities, by contrast, are fundamentally centered on the defined product specifications, attribute specifications, and equipment process parameter ranges, and on how these are met. The primary data type handled in this context is numeric, as these processes involve the collection and analysis of quantifiable results – such as the equipment process parameter range recorded for a given batch, or measured attribute results – to demonstrate compliance with set limits or acceptance criteria. This numerical data provides objective evidence that equipment operates within its approved specifications, and enables continuous, data-driven monitoring to promptly detect deviations or trends. While supporting documentation such as SOPs may include textual explanations or rationales, it is the structured numeric data that forms the backbone of effective validation and monitoring.
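    The numeric character of validation data can be illustrated with a small sketch: recorded batch values are compared against approved parameter ranges. All parameter names and limits here are hypothetical:

```python
# Hypothetical approved ranges for two equipment process parameters.
approved_ranges = {
    "drying_temp_C": (55.0, 65.0),
    "mixer_speed_rpm": (40.0, 60.0),
}

# Numeric results recorded for one batch.
batch_results = {"drying_temp_C": 63.2, "mixer_speed_rpm": 66.5}

# Flag every recorded value that falls outside its approved range.
deviations = {
    name: value
    for name, value in batch_results.items()
    if not (approved_ranges[name][0] <= value <= approved_ranges[name][1])
}
print(deviations)  # {'mixer_speed_rpm': 66.5}
```

    This kind of structured numeric comparison is what enables automated trending and alerting – and it is exactly what a text-centric qualification repository cannot do.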

    Pharmaceutical regulators expect manufacturers to proactively show processes are in control, detect deviations, implement timely corrections, and continually improve processes to guarantee product safety and efficacy.  While qualification is an integral part of the process validation program and expects all entities/assets used in the product manufacture to be shown to be “fit for use”, the expectation is also timely access to real-time or trending data, making it easy to promptly identify deviations, detect process drifts, or generate automated alerts.

    So an “all-in-one” application designed to aid qualification and CSV can act only as a storage repository for files associated with process validation. Such applications treat validation or ongoing process verification runs as static activities, with no ability to let the data speak for itself. Validation teams are left relying on time-consuming, error-prone manual reviews to collate and interpret data, increasing the risk of missed trends and delayed responses to quality issues. Furthermore, static documentation impedes data integration, reduces the potential for cross-functional collaboration, and hinders the effectiveness of continuous improvement initiatives – ultimately compromising regulatory compliance and the robust oversight expected in modern pharmaceutical manufacturing.

    Conclusions

    Ultimately, companies must weigh the true cost of investment, both in terms of efficiency and compliance risk, when selecting validation tools. It may be time for the industry to reconsider whether costly all-in-one solutions are necessary, or whether more focused, built-for-purpose software solutions targeted at manufacturing processes would better serve their needs.

    ravi

    July 27, 2025
    paperless validation
    digital transformation, digitalization, paperless validation, pharmaceutical
  • Can AI help in Validation? Cutting Through the Hype

    As interest in AI continues to grow, some vendors are promoting AI tools as capable of generating qualification deliverables such as User Requirement Specifications (URS), Functional Specifications (FS), and traceability matrices with minimal human input. This article provides a grounded, practical look at what AI can and cannot do in the context of URS and FS creation and traceability-matrix automation. It explains how LLMs work; what URS and FS documents are, and how they fit into the qualification process; the tests we ran with custom LLMs to automate traceability-matrix generation, and the results obtained; and where AI may offer limited support – while highlighting why human ownership and input are still required for critical tasks like URS and FS development.


    Introduction: Opportunity vs. Overpromise

    The rise of AI tools like ChatGPT has sparked broad interest across regulated industries. In pharmaceutical, biotech, medical device, and cosmetics sectors, companies are exploring how these tools might reduce documentation workload, increase consistency, and support compliance efforts.

    However, alongside this enthusiasm, some vendors have begun promoting the idea that large language models (LLMs) can generate qualification documents – such as User Requirements Specifications (URS), Functional Specifications (FS), or even test scripts – with a simple prompt.

    While these claims are designed to attract attention, they risk oversimplifying what these qualification documents really are, how they are developed – and where, if at all, AI actually fits into the process.

    This article explains the role and requirements of key qualification documents, explores how LLMs like ChatGPT work, and examines where AI tools can – and cannot – be appropriately applied to support the qualification process.

    What a URS Actually Does – and Why It Matters

    A URS (User Requirements Specification) is a critical foundation for any qualified entity. It defines what the user needs the system, equipment, utility, or software to do – in terms that are clear, measurable, and auditable.

    A robust URS is not a generic checklist. It must be developed by knowledgeable stakeholders who understand:

    • The intended use of the system,
    • Key process and operational parameters (e.g., throughput, control logic, data handling),
    • The regulatory framework and internal quality expectations,
    • Integration points with other systems (e.g., MES, SCADA, LIMS),
    • Environmental, safety, and compliance constraints.

    A well-defined URS supports:

    • Effective vendor selection and procurement,
    • A clear basis for design and testing,
    • Alignment with qualification protocols and traceability,
    • Compliance with GxP expectations for “fitness for intended use.”

    When a URS is vague or incomplete, the consequences can be significant: unsuitable equipment, misaligned vendor deliverables, inefficient qualification efforts, or gaps that lead to inspection findings and remediation. Poor input at the URS stage can therefore compromise the entire lifecycle of the system – including its qualification.

    How LLMs Like ChatGPT Actually Work

    How LLMs function is often misunderstood. Tools like ChatGPT do not think, reason, or understand. In fact, they generate text by predicting the most statistically likely next word in a sentence, based on patterns learned from massive volumes of publicly available internet text.

    In practice, this means:

    • They do not understand context in the way a human expert does,
    • They do not verify accuracy,
    • And they do not apply regulatory logic or domain-specific judgment.

    Critically, high-quality URSs, FSs, and qualification scripts are typically internal and proprietary – not available in the public domain. As a result, these documents are not well represented in the training data used to build general-purpose language models.

    This creates a risk of ‘hallucination’: that is, AI can sometimes confidently generate text that sounds correct but is factually inaccurate or entirely fabricated. In regulated environments, this presents an obvious risk – especially when such content is used to support compliance activities.

    Can LLMs Write a URS? What’s the Real Limitation?

    When asked to generate a URS, an LLM can produce something that looks well-formatted – but that document is likely to be too generic or vague to be useful in practice.

    For example, asking ChatGPT to generate a URS for a “sifter” will not trigger the LLM to ask clarifying questions on aspects such as:

    • Process requirements: What is the role of the sifter in the process? For example, it could be particle size separation, de-lumping, or scalping. Which screening motion – centrifugal, vibratory, or gyratory – is best suited for your materials? What are the in-process material characteristics (e.g., moisture content, particle size distribution)? How will the sifter integrate with upstream or downstream equipment, such as granulators or tablet presses – via gravity-fed or vacuum transfer systems? Is the sifter intended for batch processing or continuous operation?
    • Performance requirements: What throughput is needed? Are there maximum allowable noise or vibration levels to comply with workplace safety standards? What is the acceptable level of material loss or retention during the process?
    • Design and construction requirements: What are the material of construction (MOC) requirements? What specific surface finish (e.g., Ra < 0.8 µm) is required? What are the requirements for welding and polish? What utilities are available, such as compressed air pressure, electrical supply, or vacuum systems, to support its operation? Does it need tool-less disassembly for cleaning? What are the cleaning needs (e.g., CIP, SIP, manual)?
    • Controls and automation: Will the sifter be integrated with a Supervisory Control and Data Acquisition (SCADA) system or Programmable Logic Controller (PLC)? Are alarms, interlocks, or data logging required? Should it support electronic batch records or interface with Manufacturing Execution System (MES)? Is Process Analytical Technology (PAT)-driven monitoring required?
    • Environmental or containment needs: Is operator protection (e.g., for potent APIs) required? What classification zone applies?  For example, if OEB 5 containment is required, pneumatic clamping mechanisms have to be put in place.

    These are essential details – and they define far more than the basic equipment type. They shape how the system will be designed, selected, installed, qualified, and operated. Without them, the URS cannot serve its purpose as a regulatory and procurement document.

    Even if LLMs are fine-tuned on customer data, key questions remain:

    • Who validates the quality and compliance of the source material?
    • Is the model version-controlled?
    • Is the data private – or potentially exposed across users or systems?
    • Will the AI output reflect your company’s unique process – or generalize from unrelated inputs?

    And if subject matter experts must still correct or rebuild the draft, it’s worth asking: is the AI truly reducing effort, or simply introducing another layer of review?

    Functional Specifications: A Vendor’s Responsibility

    Once the URS is approved, the FS (Functional Specification) is typically authored by the vendor, service provider, or internal engineering team. The FS outlines how the solution will meet the defined user requirements – through system design, control logic, automation behavior, configuration, and integration.

    Authoring an FS requires:

    • Proprietary system knowledge,
    • Awareness of design constraints,
    • Understanding of operational and regulatory requirements,
    • Collaboration across quality, engineering, IT, and qualification functions.

    The FS is the vendor’s responsibility – it represents their response to the URS. And this raises an important point: why would a vendor delegate that responsibility to an AI tool that doesn’t understand their product? Even with access to a URS, an LLM cannot independently determine how a system meets those requirements without being explicitly told.

    The same applies to qualification test scripts. These are step-by-step instructions used to verify whether the system meets its documented requirements. They must be traceable, reproducible, and inspection-ready – and they require domain knowledge to be meaningful. LLMs cannot reliably generate test descriptions because they don’t understand how the system functions, what needs to be tested, or how to design a test that produces meaningful, validated results.

    If generating test scripts were as simple as prompting an AI, most qualification tools would be fully automated by now. But they are not – because test design remains a technical, judgment-based process.

    Can LLMs Automate the Process of Filling a Traceability Matrix?

    Some vendors have been marketing their ability to automate traceability, namely by using LLMs to identify which test script aligns with a given URS requirement. But can this task really be performed reliably using AI?

    One approach would be to use a type of LLM known as a sentence transformer model, a class of AI designed to compare entire sentences rather than individual words. These models can capture semantic relationships between phrases – for example, comparing the intent of a URS requirement with the description in a test script (be it an IQ, OQ, or PQ) – and then rank the closest matches accordingly, to select which script number matches each URS requirement.

    Transformer models are trained on billions of data points and can encode the meaning of phrases or entire sentences, not just individual words. However, their reported accuracy leaves some questions.
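    The ranking approach described above can be sketched with a deliberately simplified stand-in: instead of learned dense embeddings, the toy code below uses bag-of-words vectors and cosine similarity to pick the closest test description for each requirement. The statements are invented, and a real sentence transformer would capture meaning far better than raw word overlap:

```python
import math
from collections import Counter

def vectorize(text):
    # Crude bag-of-words term counts; a sentence transformer would instead
    # produce a learned dense embedding that encodes sentence meaning.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(count * b[term] for term, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

urs = ["The sifter shall achieve a throughput of 200 kg per hour",
       "All product contact parts shall be 316L stainless steel"]
tests = ["Verify product contact parts are constructed of 316L stainless steel",
         "Confirm throughput of 200 kg per hour is achieved"]

# For each URS statement, rank the test descriptions and keep the best match.
matches = {req: max(tests, key=lambda t: cosine(vectorize(req), vectorize(t)))
           for req in urs}
```

    Even this toy example shows why the task is fragile: the score depends entirely on how the sentences happen to be phrased, which is exactly where procedural test-script language diverges from requirement language.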

    Here are the published accuracy figures for some commonly used transformer models:

    Model                      Reported Accuracy
    all-distilroberta-v1       ~85%
    all-MiniLM-L12-v2          ~80%
    paraphrase-MiniLM-L6-v2    ~70%

    To assess performance in a qualification context, we tested two of these models – all-distilroberta-v1 and all-MiniLM-L12-v2 – by asking them to match 50 URS statements against 50 corresponding functional requirements.

    Despite both documents being clearly written, the models matched 11 of the 50 statements to the wrong requirements – an error rate of more than 20%. Even when combining different transformer models to improve accuracy, we never reached 90%. Of note, this occurred even though both input sets were well structured. In real-world scenarios, where URSs and scripts often vary in phrasing, specificity, or formatting, this represents a serious challenge to the idea that LLMs can automate traceability.

    For matching URS requirements to test scripts, the accuracy shortfall would be even more pronounced, as scripts are written in less descriptive, more procedural language than FSs – simply describing the steps to be executed, not their intent. Sentence transformer models are not yet capable of reliably inferring test intent from such procedural language. The point, then, is this: if human experts still need to manually verify each AI-suggested match, what, if any, are the efficiency gains?

    It’s also worth noting that these models are more specialized than general-purpose LLMs like ChatGPT or Mistral. Therefore, if transformer-based models – trained specifically for sentence matching – struggle with this task, non-specialized models will likely perform worse.

    Where LLMs Might Help

    Despite these limitations, LLMs can still support certain tasks – particularly when used under SME supervision in low-risk contexts. Potential use cases include:

    • Drafting boilerplate content (e.g., introductory or regulatory reference sections),
    • Improving clarity, grammar, or formatting,
    • Suggesting document structures or outlines.

    These kinds of low-risk, repeatable, and time-consuming activities are suitable areas for automation, with AI acting as a support tool. However, even in these cases, outputs need to be reviewed and approved by qualified personnel to ensure regulatory accuracy and contextual appropriateness.

    Final Thoughts

    The importance of accuracy in qualification, and the limitations of LLMs mean human input is still essential. While LLMs can support low-risk, repetitive tasks like drafting boilerplate content or improving formatting, their role in generating qualification documents remains limited.

    By definition, a URS must reflect what human users need from the asset/service to be purchased – something no AI can independently determine. These documents require expert judgment, deep process understanding, and context-specific decisions that LLMs, at their current level of maturity, simply cannot replicate.

    Similarly, the use of AI for automating traceability – such as matching URS statements to test steps – still falls short. Even advanced sentence transformer models show accuracy rates that are too low for compliance-critical tasks. In regulated environments, a 20% error rate isn’t just inefficient – it’s dangerous.

    Errors in qualification can have serious consequences: from costly equipment failures to regulatory violations or, worse, risks to patient safety. That’s why experienced professionals must remain in control of the process.

    In short, LLMs may enhance productivity in certain areas, but when it comes to URS, FS, and traceability matrices, human expertise, oversight, and responsibility are irreplaceable.

    ravi

    June 18, 2025
    Uncategorized
    AI, pharmaceutical, validation
  • The ROI of eLog Pro: Not Just Financial Benefits for Pharmaceutical Manufacturers

    Synopsis

    This article explores how eLog Pro, Quascenta’s electronic GMP logbook software, helps pharmaceutical manufacturers to achieve GMP compliance and boost operational efficiency. It details the limitations of paper-based logbooks and shows how digital documentation improves data accuracy, reduces human errors, and streamlines regulatory processes. Featuring mobile data capture, offline access, and seamless equipment integration, eLog Pro supports real-time collaboration and audit readiness. The platform delivers significant ROI through reduced costs, increased productivity, and improved data integrity—making it a vital tool for pharma digital transformation in GMP environments.

    Why eLog Pro?

    Pharmaceutical manufacturers operate in a highly regulated environment where comprehensive documentation of routine operations, including equipment usage, cleaning, calibration, and environmental conditions, is necessary for complying with Good Manufacturing Practices (GMP). Traditionally, this has meant maintaining thousands of paper-based logbooks – an approach that is unfortunately inefficient, error-prone, and costly.

    Electronic GMP logbooks, however, such as Quascenta’s eLog Pro, offer a user-friendly software solution that transforms how pharmaceutical facilities manage GMP activities. By replacing traditional paper-format logs with a streamlined digital platform, eLog Pro captures data efficiently, enabling real-time collaboration, automated compliance, and significant cost savings. Beyond improving operational efficiency, it also delivers a strong return on investment (ROI) for adopters by reducing material costs, minimizing errors, and freeing up valuable personnel time.

    Streamlined Data Capture

    One of the main inefficiencies of paper-based logbooks is the time and effort required for manual data entry. With eLog Pro, operators enter data directly into the system, eliminating the risk of transcription errors and ensuring greater accuracy in record-keeping. The platform also supports mobile devices, allowing personnel to record events and observations in real time rather than relying on handwritten notes that must be transcribed later. Multi-user access also enables multiple team members to contribute to log records simultaneously, adding to the efficiency gains for your team.

    In manufacturing or R&D environments, data often needs to be collected near equipment or instruments where Wi-Fi may be unavailable. eLog Pro therefore allows data entry even while offline. The system then synchronizes securely with the cloud once connectivity is restored – e.g. at the end of the day or after shifts – through a process protected by robust cybersecurity safeguards.
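    The capture-offline-then-sync pattern can be sketched generically. This is a common offline-first design, not a description of eLog Pro's internal implementation; the entry fields and the upload stub are invented:

```python
import json

class OfflineLog:
    """Queue log entries locally while offline; flush them once connectivity returns."""

    def __init__(self):
        self.pending = []  # entries captured while disconnected

    def record(self, entry):
        self.pending.append(entry)

    def sync(self, upload):
        # 'upload' stands in for whatever secure transport the platform uses.
        # An entry leaves the local queue only after the server acknowledges it.
        acknowledged = [e for e in self.pending if upload(json.dumps(e))]
        self.pending = [e for e in self.pending if e not in acknowledged]
        return len(acknowledged)

log = OfflineLog()
log.record({"equipment": "GRAN-01", "event": "cleaning complete"})
log.record({"equipment": "GRAN-01", "event": "usage start"})
sent = log.sync(upload=lambda payload: True)  # stub: every upload succeeds
print(sent, len(log.pending))  # 2 0
```

    Keeping entries in the queue until the server acknowledges them is what prevents data loss if connectivity drops mid-sync.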

    By reducing manual errors and automating data entry, companies not only save time but also minimize costly deviations and investigations. Investigating and resolving data discrepancies in a GMP environment is a resource-intensive process, often requiring hours of additional work from quality assurance teams. With eLog Pro, data integrity is ensured from the outset, reducing the likelihood of such costly errors.

    Improved Visibility and Collaboration

    Access to accurate, up-to-date log records is critical for smooth operations and regulatory compliance. eLog Pro provides real-time access to log data from any location, eliminating delays associated with searching for paper records. A centralized digital repository ensures that all teams work with the same accurate data, preventing duplication and miscommunication.

    Improved coordination between departments leads to better decision-making. When maintenance teams, quality personnel, and operations staff have instant access to log records, they can collaborate more effectively, reducing downtime while ensuring continued compliance with GMP standards. The ability to quickly retrieve historical data also helps make audits and inspections go more smoothly, saving time and reducing the associated stress of compliance reviews.

    Data-Driven Decision-Making

    Traditional logbooks provide limited opportunities for analysis. Paper records make it difficult to identify trends, investigate recurring issues, or make data-driven decisions. eLog Pro changes this by offering powerful data analysis tools that enable companies to identify patterns, conduct root cause analysis, and take proactive measures to prevent issues before they escalate.

    For example, a facility using paper-based logbooks may not immediately recognize recurring equipment failures, leading to repeated production stoppages. With eLog Pro, trend analysis can highlight patterns of equipment malfunctions, allowing maintenance teams to address underlying issues before they result in costly downtime. Similarly, deviations and compliance issues can be tracked and analyzed in real time, improving overall process reliability.
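    As a sketch of the kind of trending this enables, the snippet below counts failures per equipment/cause pair across hypothetical log entries; recurring issues surface automatically rather than through manual review:

```python
from collections import Counter

# Invented digital log entries; with paper logbooks this cross-record
# counting would require a manual page-by-page review.
entries = [
    {"equipment": "MIX-02", "event": "failure", "cause": "seal leak"},
    {"equipment": "MIX-02", "event": "failure", "cause": "seal leak"},
    {"equipment": "DRY-01", "event": "failure", "cause": "sensor fault"},
    {"equipment": "MIX-02", "event": "failure", "cause": "seal leak"},
]

# Count failures per (equipment, cause) pair to surface recurring issues.
trend = Counter((e["equipment"], e["cause"]) for e in entries if e["event"] == "failure")
recurring = [pair for pair, n in trend.items() if n >= 3]
print(recurring)  # [('MIX-02', 'seal leak')]
```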

    Regulatory Compliance Simplified

    Ensuring compliance with regulatory requirements is one of the most time-consuming aspects of pharmaceutical manufacturing. Paper-based logbooks require extensive manual effort to maintain, verify, and retrieve during audits. eLog Pro simplifies this process by automatically generating complete and accurate audit trails, ensuring that all log entries and modifications are recorded transparently.

    Electronic signatures further enhance compliance by helping to ensure data authenticity and integrity, to meet FDA 21 CFR Part 11 and other regulatory requirements. Secure cloud-based storage eliminates the risk of lost or damaged records, a common issue with paper logbooks. By streamlining audit preparation and reducing the risk of non-compliance, eLog Pro helps companies avoid costly penalties and regulatory delays.
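    One widely used way to make an electronic audit trail tamper-evident is hash chaining, where each record stores a hash of its predecessor. The sketch below illustrates that general technique only; it is not a claim about how eLog Pro is built internally:

```python
import hashlib
import json

def append_entry(trail, data):
    # Each record carries the previous record's hash, chaining them together.
    prev = trail[-1]["hash"] if trail else "0" * 64
    digest = hashlib.sha256(json.dumps({"data": data, "prev": prev},
                                       sort_keys=True).encode()).hexdigest()
    trail.append({"data": data, "prev": prev, "hash": digest})

def verify(trail):
    # Recompute every hash; any altered record breaks the chain.
    prev = "0" * 64
    for record in trail:
        expected = hashlib.sha256(json.dumps({"data": record["data"], "prev": prev},
                                             sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

trail = []
append_entry(trail, "operator A recorded cleaning of MIX-02")
append_entry(trail, "operator B corrected the entry, reason: typo")
ok_before = verify(trail)        # True
trail[0]["data"] = "tampered"    # simulate an undocumented change
ok_after = verify(trail)         # False
```

    The design choice here is that corrections are appended as new records with a stated reason, never edits in place – the same principle audit-trail regulations expect.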

    Financial Benefits: A Breakdown of Cost Savings

    Implementing eLog Pro leads to substantial cost savings by eliminating expenses associated with paper-based logbooks. The savings begin with the direct costs of paper, printing, and storage but extend beyond that to include labor efficiencies and error reduction.

    For example, a typical pharmaceutical site may maintain logbooks for equipment usage, cleanroom monitoring, warehouse environmental conditions, and calibration records. For a site managing 500 pieces of equipment, maintaining paper logbooks can be a significant expense.

    • The cost of printing and binding a 100-page logbook is approximately $15 per logbook.
    • Maintaining 500 logbooks per site results in an estimated monthly printing cost of $7,500, translating to $90,000 annually.
    • Additional logbooks for environmental monitoring and calibration can add another $15,000 to $60,000 in yearly costs.
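    The printing figures above can be reproduced directly; all numbers are the article's own illustrative assumptions rather than measured costs:

```python
# The article's illustrative assumptions: 500 logbooks per site,
# each replaced monthly at about $15 for printing and binding.
cost_per_logbook = 15
logbooks_per_site = 500

monthly_printing = cost_per_logbook * logbooks_per_site
annual_printing = monthly_printing * 12
print(monthly_printing, annual_printing)  # 7500 90000
```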

    These figures do not account for the indirect costs of managing paper records. For example, paper logbooks require climate-controlled storage to prevent deterioration over time, adding facility and maintenance expenses. Digital logbooks eliminate these storage costs while also reducing the risk of lost or misplaced records.

    Labor savings also contribute significantly to ROI. Employees spend considerable time manually entering data, reviewing logbooks for completeness, and correcting errors. Transitioning to eLog Pro reduces administrative burden, freeing up personnel for more value-added tasks. Additionally, automated workflows and instant data retrieval speed up routine operations, improving overall productivity.

    Conclusion: A Clear Return on Investment

    Quascenta’s eLog Pro is more than just an electronic logbook—it is a strategic investment in efficiency, compliance, and cost reduction. By transitioning from paper-based logbooks to a digital solution, pharmaceutical companies can achieve significant savings on materials, labor, and storage while improving data accuracy and regulatory readiness.

    With a rapid implementation timeline and an intuitive interface, eLog Pro allows companies to start realizing benefits within weeks. As the industry moves toward increased digitalization, adopting a solution like eLog Pro is a crucial step in optimizing manufacturing processes, ensuring compliance, and maximizing profitability.

    For companies looking to improve GMP log management, eLog Pro offers a proven solution that delivers measurable financial and operational benefits. Request a demo today to see how it can transform your facility.

    Quascenta Team

    May 26, 2025
  • Digitalization in Pharma: How to make the switch

    Digitalization in Pharma: How to make the switch

    Synopsis

    A robust digital ecosystem streamlines processes and improves data management in the pharmaceutical industry, but conflicting priorities can create disconnected software silos. Because digitization needs continuous investment, steady incremental progress is better than sporadic efforts. Success depends on strong leadership, a committed and skilled implementation team, and solid management and stakeholder support.

    Introduction

    The COVID-19 pandemic signalled a shift in mindset for the pharmaceutical industry. Almost overnight, many companies were forced to move from traditional paper documents and desktop-based applications to a digital, remote-working ecosystem. 

    Yet, when the SARS-CoV-2 virus went into an endemic state and employees returned to work, many of the processes that had been converted to the digital domain remained that way. It transpired that they were more efficient, drove better compliance, and saved companies resources and money. And so, willingly or not, pharmaceutical companies started embracing digitalization.

    The industry had, of course, been using software applications before the pandemic, such as Enterprise Resource Planning (ERP), Quality Management Systems (QMS), and Laboratory Information Management Systems (LIMS). But the pandemic instigated a drive to move all processes to digital platforms. Now, as companies increasingly invest in software applications and gravitate toward connected factories and sites, it has become clear that there are significant challenges involved in integrating both hardware and software systems.

    Challenges

    One of the challenges pharmaceutical companies have faced relates to the migration away from paper-based records. A batch manufacturing record (BMR), for example, must be maintained for every product batch, and is typically recorded on paper. The same is true of the logs that track everything from daily equipment use to weighing balance verifications. The shift to digital during the pandemic, however, meant that these records had to be captured on tablet devices instead. For staff to move freely around a facility, from equipment to equipment, these tablets require wireless connectivity to a central server or the cloud. And therein lies the problem – existing cleanroom infrastructures were not designed for wireless connectivity.  Cleanrooms need to be retrofitted with wireless routers.  And this also brings up a whole host of cybersecurity concerns, such as setting up an effective ransomware mitigation plan.

    Another challenge is choosing the right software. No application can manage an entire site, so companies must select and integrate a series of software tools for certain tasks, activities, and processes without duplicating them, which is where things can get confusing. Process parameter data, for example, is captured in both a BMR and a Process Performance Qualification (PPQ) run record or report, while the data in an Annual Quality Review (AQR) Report comes from disparate sources, with some requiring analyses by statistical tools.

    Defining digital requirements

    The first step to a successful digital transformation program is identifying the activities being carried out at a manufacturing site. A plan must then be put together to discern which of them can and should be digitized.

    Not every activity needs to be digitized immediately, however. Just as a solo house buyer might choose a multi-bedroom property, leaving space for future expansion is a sensible move when it comes to digital applications.

    Who decides which activities should go digital? Many companies have created a new team or point person, responsible for getting departments to share their digitization requirements. Approving the overall requirement, meanwhile, is a job for the site head. It’s a critical activity, one that’s similar to putting up a pharmaceutical facility.

    The site head has the power to allocate money across multiple budgets, even when the purse strings are tight, and is the right person to form a team that has collective knowledge of the activities taking place at the site.

    Setting goals

    Once the activities being moved to the digital domain have been identified, specific goals need to be decided on. Many companies prioritize the ability to see and analyze data on demand while others want to streamline their practices before the workflow is digitized, doing away with non-value-adding activities. The latter is a critical issue, one that can lead to heated discussions. In company meetings, it’s common to hear the expression: “If it ain’t broke, don’t fix it”. This is where the site head can offer a (gentle) guiding hand to make sure that digitalization and all the efficiency it brings isn’t sacrificed through fear of disrupting the status quo.

    A connected digital platform will generate a large amount of data. That’s why it’s important to decide which part of the collected data is to be analyzed and how. Companies may also find that some of the data has no purpose, in which case they need to decide whether it’s worth storing. In the same vein, it’s important to examine the relevant procedures and forms to identify those that will become obsolete once transferred to the digital domain. For example, equipment usage log data can be obtained from the Manufacturing Execution System (MES), so why capture it in a logbook? It’s critical to ‘spring clean’ what data is to be collected and from where before moving it to the digital realm.

    Another question to ask is whether the technology currently available can be leveraged to improve efficiency. For instance, digitalization has already impacted training with classroom training being replaced in some cases by virtual reality training, while videos have replaced many stepwise procedures.

    Creating a requirements document

    Once the workflows and associated activities have been sanitized and streamlined, a site requirement document needs to be created. The priority should be on marking mandatory requirements – all others can be thought of as ‘nice-to-haves’.

    A company with multiple manufacturing sites might even consider a global document covering all locations.

    Ultimately, the goal is to define a system with seamless data flows. This is where an (internal or external) IT resource with software architecture knowledge can be included in the team. This resource can assist the team with developing the digital platform’s architecture.

    Revising the requirements document

    Next, the team needs to start looking for applications that are available on the market. The goal at this stage is to find applications that meet all the mandatory requirements. Some of these requirements may already be covered by existing company applications such as ERP, LIMS, and QMS, and it’s possible that new requirements arise when reviewing applications. On the other hand, there may be requirements – especially mandatory ones – that aren’t met by existing applications (more on how to handle these later in the article).

    Once the team has defined its requirements and compared them with what’s available on the market, it may need to then include new requirements or redefine existing ones in its documentation. Applications of interest should be categorized into groups covering the same activities. If some overlap, the application can be incorporated into multiple groups. Then, the documentation should be restructured to define the requirements for each application group. With the help of the IT resource, details about how these application groups will ‘talk to’ each other should also be noted in the document.

    One way forward involves identifying applications that can form the nucleus of the digital platform, with other applications interfacing through the nucleus via an Application Programming Interface (API).  An API allows two software applications to ‘speak’ to each other. This streamlines data flows between applications.

    Data could be transferred wholesale from one application to the other, but that would be inefficient. For example, if all the receiving application wants to know is when production on a piece of equipment started and ended, why provide extraneous information? The right design, then, is to identify the fields of interest and only query them when needed.
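
    This field-level querying can be sketched in a few lines. The record layout, field names, and query helper below are illustrative assumptions, not the interface of any real MES:

    ```python
    # Hypothetical sketch: rather than transferring a full equipment-usage
    # record, the receiving application requests only the fields it needs.

    FULL_RECORD = {  # what the MES might hold for one equipment run (illustrative)
        "equipment_id": "GRN-001",
        "batch_id": "B2301",
        "operator": "jdoe",
        "production_start": "2025-04-01T08:05:00Z",
        "production_end": "2025-04-01T16:40:00Z",
        "cleaning_status": "pending",
    }

    def query(record: dict, fields: list[str]) -> dict:
        """Return only the requested fields, as a field-filtered API would."""
        return {f: record[f] for f in fields if f in record}

    # The receiving application asks only for the start and end times:
    slim = query(FULL_RECORD, ["production_start", "production_end"])
    ```

    Everything else – operator, batch, cleaning status – stays in the source application until some consumer explicitly asks for it.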

    The other option is to consider a data hub. Here, data from disparate sources is assimilated and stored in a database that can be used by any application. The advantage of this kind of architecture is that it moves away from data duplication, avoiding  data inconsistencies and enhancing data integrity. Care should be taken though to ensure that a clear audit trail identifies the changes any application makes to the data and that data changes are broadcast to all applications. This immediately reveals the impact of any changes made.
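
    A minimal sketch of such a hub is shown below – the class, application names, and callback interface are all illustrative assumptions, not a reference to any particular product:

    ```python
    from datetime import datetime, timezone

    class DataHub:
        """Minimal data-hub sketch: a single source of truth that keeps an
        audit trail and broadcasts every change to subscribed applications."""

        def __init__(self):
            self.data = {}          # the shared, de-duplicated dataset
            self.audit_trail = []   # who changed what, and when
            self.subscribers = []   # callbacks registered by applications

        def subscribe(self, callback):
            self.subscribers.append(callback)

        def update(self, app: str, key: str, value):
            old = self.data.get(key)
            self.data[key] = value
            self.audit_trail.append({
                "app": app, "key": key, "old": old, "new": value,
                "at": datetime.now(timezone.utc).isoformat(),
            })
            for notify in self.subscribers:  # broadcast the change
                notify(key, value)

    # Example: a subscriber is notified when the MES updates a value.
    hub = DataHub()
    seen = []
    hub.subscribe(lambda key, value: seen.append((key, value)))
    hub.update("MES", "GRN-001/status", "in_production")
    ```

    Because every write passes through the hub, the audit trail and the broadcast come for free – no application can change shared data silently.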

    It’s also important to specify the location where an application will be hosted and who it will be hosted by. This is where the team’s input becomes crucial.

    Storage

    Something else to consider is where the applications and the data will reside. The software industry is increasingly moving to the cloud. The main driver behind this is the ability to provide instant support, and, more importantly, to put all infrastructure under the software vendor’s control.

    A problem with site-based servers is that they require localized on-site support, which has become very expensive. It’s also well documented that multi-tenant Software as a Service (SaaS) applications can be rolled out faster than site-based installations, thanks to the time saved in avoiding application installations and leveraging vendor-executed qualification documentation. Vendor-executed documents can be leveraged because the application has already been installed and qualified; repeating the same tests for each site does not add any value, except when executing performance qualification.

    Some vendors only provide applications on a multi-tenant SaaS basis. If some of the applications are running on a local server or on the company’s cloud account, the type of data architecture to use when integrating these applications is the next thing to address. 

    Once the requirement document has been revised, the team may prefer splitting the document for vendor evaluation. Care should, however, be taken to ensure that the relevant interface requirements are properly spelled out in the “child requirement documents”.

    Finding a vendor

    Selecting a vendor is critical to successfully setting up a complete digital platform. Some make the mistake of sharing the requirement document with the vendor and then asking it to self-certify whether all requirements are being met. This can lead to either:

    (a) the company only discovering missing features after the software has been purchased, or

    (b) the vendor hastily developing the missing features and then releasing them without proper testing.

    Neither of these options leads to productive outcomes. They usually result in delays caused by incomplete platform setups and sometimes even the selection of a new vendor. The digitalization team must, therefore, verify whether the requirements are being met and make notes on those that aren’t, along with a reason why. This can still lead to the vendor being asked to develop missing features, but it will, at least, be an informed decision.

    Another common pitfall is making decisions based on presentations that rely on scripted videos. In the era of Theranos-type scams, it’s not always clear whether a product demo represents a prototype or a working application. This is why it’s important for the team to verify the product’s minimal functionalities. Company data might need to be anonymized when checking requirements against the features being presented. This anonymized data can then be fed into the application and the process verified. This should be a mandatory requirement.

    Some applications require specialist knowledge (e.g., process validation and cleaning validation). So naturally, an application designed by a subject matter expert will be of a higher quality and better suit users’ needs. This becomes very apparent during the demo as the workflow tends to be much smoother. 

    It may also be a good idea to consider a vendor who can provide software for multiple applications, thereby ensuring interconnectivity between them all. But care should be taken to ensure that the vendor has expertise in the other applications of interest. For example, it could be that the vendor is known for their Manufacturing Execution System (MES) application but not for their process validation application.  In this case, it’s better to consider applications from separate vendors. 

    Once candidate applications have been identified, the team faces the challenging task of matching its requirements to each of them. Only then can it begin to appraise vendors, discuss goals, and share requirements. Vendors should also be encouraged to look out for and highlight any potential implementation roadblocks so that they can be cleared early on.

    What should the digitalization team do when requirements are not supported by any existing applications? Should it develop solutions on its own? In this case, reflection is key. As a company in the drug manufacturing business – not software development – it’s best to stay in the right lane. There are very few pharmaceutical manufacturers that have successfully developed and managed applications by themselves.

    The problem does not lie in the software’s development phase but rather in its maintenance, where both the underlying framework and the application itself need to be kept up to date. It’s at this point, especially in the manufacturing sector, that a third-party vendor is usually called in to update the code. But what happens when the technology stack used to build these applications becomes obsolete – will another application be developed? Who will test it? The self-development of applications often leads down a rabbit hole of issues such as these. In some companies, there’s something of a ‘software graveyard’ where many tombstones point to abandoned in-house software developments. The best strategy, then, is to identify a suitable vendor who knows what they’re doing.

    Qualifying and rolling out applications

    Many applications fail early on because some members of the team tasked with the roll-out – qualifying the system, training all associated users, and implementing changes to affected procedures – do not believe that the application they’ve purchased will meet the relevant requirements.

    Procedural delays and red tape caused by objections can stretch out application implementation times. It’s therefore important that the team leader lay out a clear timeline. Site management should also provide support for addressing delays (whether actual or perceived).

    Sometimes it’s just not possible to roll out all the features of an application at once. This might be due to network or extra hardware requirements. In such cases, the application roll-out can be divided into phases that each have a clearly defined execution timeline. Site management should ensure that an adequate budget is available for rolling out the later phases.

    Qualifying the application can also drain a significant amount of time. The team must, therefore, draw up a clear map of what needs to be qualified. If dealing with a SaaS application, then it will need to discern whether the vendor qualification document can be leveraged rather than repeating the qualification process.

    A clear demarcation must also be made when it comes to establishing what the requirements will be if the application is hosted on a local server rather than delivered as a SaaS application. If this is not done, the computer systems validation team will often create its own installation qualification document in an ad hoc fashion, which can result in an auditor questioning how the team performed an installation on a server that was not its own.

    A long-term commitment

    Unlocking the full potential of a digital ecosystem demands ongoing evolution and adaptation. Regular assessments and strategic adjustments are essential for its continued success. Changes in technology, regulations, vendor status, and the like can dictate various requirement changes, while activities that could not previously be brought onto the digital platform can be successfully integrated. It’s therefore crucial that pharmaceutical companies look to maintain ongoing digital transformations into the future.

    From defining digital requirements and documenting them, to finding the right vendor and finally rolling out applications, leadership is key on the journey to digitalization. It plays a central role in both budgeting to keep the platform fully functional and ensuring that the right team members are retained.  This team will be responsible for carrying out an annual review of the digital platform and recommending course corrections as required. In all this, there must be active and constructive leadership support.

    In today’s interconnected world, we can harness and leverage data to our advantage. It’s crucial that we understand how to effectively use data and then translate that understanding into action that enhances existing processes. The members of an agile and efficient digital transformation team serve as skilled navigators capable of interpreting ever-changing circumstances and adapting to a shifting landscape.

    Quascenta Team

    April 28, 2025

Powering Quality from Lab to Label © 2025, Quascenta Pte. Ltd.