In Digital Age, Pharmacovigilance Must Find Ways to Control Data Overload and Minimize Risk
Jun 15, 2016
By Sharad Prakash
The lifecycle associated with managing the safety of products on the market has grown dramatically more complex. Not only has event reporting grown in scale, method and source, but regulatory authorities’ expectations about risk minimization activities have intensified. Companies need to find ways to wrap their arms around these converging trends and demands in order to control and manage the safety assessment lifecycle.
Until recently, companies would collect adverse event reports from a relatively small number of sources (mostly physicians and investigators) through a few channels: email, faxes and call centers. Events were processed in a database, an assessment was made of the causal association between the drug and the event, and the case was reported to the regulatory authorities and, where relevant, to marketing partners. Serious events, or non-serious events not mentioned in the package insert leaflet or product safety specifications, then required that companies undertake risk minimization activities, as they still do.
Indeed, the goal has always been, and continues to be, to work toward risk minimization in order to ensure patient safety.
Today, however, digitization has changed the landscape. Events are reported electronically, and come from many sources: literature databases, call centers, social media, hospitals, healthcare centers, in-house clinical safety and public databases established by each regulatory authority.
Toward Structured Safety Surveillance
With so much data, it would be impossible to assess each report individually, nor would it be beneficial to do so, given that an adverse event may be unrelated to the medicine in question. The push is therefore toward analysis at the aggregate level through signal detection and signal management: identifying signals from multiple safety sources, reconciling them, and determining whether there is a causal relationship that has not previously been observed and documented.
This requires a fairly sophisticated suite of tools that allows safety departments to look across these collective sources and apply qualitative and quantitative techniques, such as case series review, aggregate analysis, and advanced text and data mining, to discover signals indicating previously unknown associations.
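To make the quantitative side concrete, here is a minimal sketch of one widely used disproportionality technique, the proportional reporting ratio (PRR), computed from a 2x2 contingency table of report counts. The counts, function names and screening thresholds below are illustrative assumptions, not a description of any particular vendor's tool.

```python
def prr(a, b, c, d):
    """Proportional reporting ratio for a drug-event pair.

    a: reports of the event of interest for the drug of interest
    b: reports of all other events for the drug of interest
    c: reports of the event of interest for all other drugs
    d: reports of all other events for all other drugs
    """
    return (a / (a + b)) / (c / (c + d))

def is_potential_signal(a, b, c, d, min_cases=3, threshold=2.0):
    # A commonly cited screening rule flags a pair when PRR >= 2
    # with at least 3 cases (often combined with a chi-square test).
    return a >= min_cases and prr(a, b, c, d) >= threshold

# Made-up counts: 20 reports of the event on the drug, 980 other
# reports on the drug; 50 event reports and 9,950 other reports
# across the rest of the database.
print(prr(20, 980, 50, 9950))              # roughly 4, well above threshold
print(is_potential_signal(20, 980, 50, 9950))
```

A flagged pair is not a confirmed risk; it is simply a statistical prompt for the medical review that follows.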
Once those signals have been identified, there’s a whole lifecycle: triage, validation, assessment, classification of the signal as a risk, and subsequent actions. That’s where risk minimization activities kick in. Companies must not only notify the patient community and health authorities, but also take proactive steps, such as changing drug labels, initiating “dear doctor” communications, creating patient education programs, and in some cases initiating post-market safety studies to determine the risk-benefit ratio.
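The lifecycle above can be sketched as a simple state machine. The stage names and allowed transitions here are an illustrative assumption drawn from the steps just described; real workflows vary by company and regulator.

```python
from enum import Enum, auto

class Stage(Enum):
    DETECTED = auto()
    TRIAGED = auto()
    VALIDATED = auto()
    ASSESSED = auto()
    CLASSIFIED_AS_RISK = auto()
    ACTION_TAKEN = auto()
    CLOSED = auto()

# Allowed forward transitions; a signal may also be closed early,
# e.g. when it is refuted during validation.
TRANSITIONS = {
    Stage.DETECTED: {Stage.TRIAGED, Stage.CLOSED},
    Stage.TRIAGED: {Stage.VALIDATED, Stage.CLOSED},
    Stage.VALIDATED: {Stage.ASSESSED, Stage.CLOSED},
    Stage.ASSESSED: {Stage.CLASSIFIED_AS_RISK, Stage.CLOSED},
    Stage.CLASSIFIED_AS_RISK: {Stage.ACTION_TAKEN},
    Stage.ACTION_TAKEN: {Stage.CLOSED},
    Stage.CLOSED: set(),
}

class Signal:
    def __init__(self, drug, event):
        self.drug, self.event = drug, event
        self.stage = Stage.DETECTED

    def advance(self, new_stage):
        # Enforce the workflow: skipping a stage raises an error.
        if new_stage not in TRANSITIONS[self.stage]:
            raise ValueError(f"cannot move from {self.stage.name} to {new_stage.name}")
        self.stage = new_stage
```

Modeling the workflow explicitly is what makes the later audit trail possible: every signal carries a record of where it is and how it got there.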
The next step is to measure the effectiveness of these steps and determine whether they have achieved risk minimization, and so the cycle of signal detection begins again.
From the point of view of risk minimization and patient safety, more data from more sources is welcome, provided that data is appropriately analysed and understood and appropriate steps are put in place.
The reality for companies, however, is that these steps haven’t been easy to put in place. The problem lies in processes and tools that aren’t harmonized. Companies might have a tool that collects data from literature searches, one that scans social media, and another that just looks at authority or proprietary clinical safety databases. Even if companies do gather the requisite safety signals they must then manage those signals in a structured and efficient manner without over-burdening safety teams.
If the data quality is poor or incomplete, or the management of safety signals is ad hoc, it will likely lead to inaccurate conclusions. Failing to pick up potential safety issues early enough harms patients and the company.
As regulatory authorities push toward a systematic approach to benefit-risk assessment, companies need to adopt end-to-end processes and solutions that not only integrate data from the relevant sources but also provide a structured framework for the identification, management and communication of safety concerns. Having all the information in one place makes it easier for medical reviewers to assess all the sources to make effective decisions.
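One way to picture that integration is a common record that every source feeds into. The schema and the adapter below are a minimal sketch with invented field names; an actual system would map each source's native format (literature citations, call transcripts, social media posts) onto a standard such as the ICH E2B individual case safety report.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SafetyReport:
    source: str      # e.g. "literature", "call_center", "social_media"
    drug: str
    event: str
    received: date
    serious: bool

def from_call_center(raw: dict) -> SafetyReport:
    # Each source gets its own adapter mapping native fields onto the
    # common schema; the keys in 'raw' here are hypothetical.
    return SafetyReport(
        source="call_center",
        drug=raw["product_name"],
        event=raw["reported_reaction"],
        received=date.fromisoformat(raw["call_date"]),
        serious=raw.get("serious", False),
    )
```

With every source normalized to one shape, a medical reviewer queries one place rather than three disconnected tools.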
An end-to-end solution should also enable more sophisticated management of the data through structured, systematic processes, and thorough risk mitigation measures. In addition, it should enable companies to quickly and easily produce reports or documents that highlight signals accompanied by a full range of decisions. Having these reports at hand during inspections will help companies demonstrate their ability to continuously monitor and manage the safety profile. Furthermore, for a global life sciences company, there is a pressing need to be able to categorize signals into different risks and identify the specific risk minimization activities that are being put in place. These will inevitably vary from one region to the next, depending on local regulations, so being able to tailor measures to each market is imperative.
By addressing the potential pitfalls, vulnerabilities and challenges within signal and risk management across the lifecycle, companies will not only protect patients, but protect their products and safeguard the company’s reputation.
In my next blog, I’ll dive into the thinking behind integrated signal detection and risk management as a competitive weapon. But in advance of that, you can find me at our booth (#1925) at this year’s DIA Annual conference in Philadelphia. Drop me an email and we’ll secure a time.