Wednesday, December 30, 2009

Data Supporting FDA Approval for Cardiovascular Devices Called into Question

Key Points:

Device data submitted to FDA have unreliable endpoints, little randomization
Discrepancies found in patient numbers and ages
Call for more rigorous scientific standards as basis for approvals

Studies used to support US Food and Drug Administration (FDA) approval of cardiovascular devices often are scientifically inadequate and may be prone to bias, according to a study in the December 23/30, 2009, issue of the Journal of the American Medical Association. The review of 123 summaries of safety and effectiveness data found that 65% of implantable or invasive devices were approved based on data from a single study.
Researchers led by Rita F. Redberg, MD, MSc, of the University of California, San Francisco (San Francisco, CA), analyzed the methodology and primary endpoints used in studies submitted to the FDA as supportive evidence for 78 cardiovascular devices that received pre-market approval between January 2000 and December 2007.
Of the 123 studies, 80% reported the number of participants enrolled, 27% were randomized, and 14% were blinded. Some device groups had a higher proportion of randomized and blinded studies. Of 24 studies of cardiac stents, for example, 54% were randomized and 46% were blinded.

No Endpoints for Some, Discrepancies for Others

Overall, 14% of the 123 studies had no primary endpoint stated. Of those with primary endpoints, the vast majority were surrogates, which may not be reliable predictors of actual patient benefit, the authors write.
Among studies that stated both the number of patients enrolled and the number analyzed, 78% contained some discrepancy between the 2 numbers. The most common discrepancy was that more patients were enrolled than analyzed. Other discrepancies involved mean age and the number of patients enrolled by sex and race. Such discrepancies may introduce bias, because patients with less favorable outcomes may be lost to follow-up, which could cause safety concerns to be overlooked.
The researchers also found that 15% of the primary endpoints were noninterpretable. In most of these cases (78%), no target goal for device performance was stated; in 1 instance, the results themselves were not reported. Follow-up periods varied by type of device: the longest median follow-up time for primary endpoint analysis was 365 days, for intracardiac devices and endovascular grafts, and the shortest was just 1 day, for hemostasis devices.

Clinicians Look to FDA for Guidance

Dr. Redberg and colleagues point out that the FDA has far more experience with drug approvals than with device approvals, which began only in 1976 with the Medical Device Amendments. In addition, the number and complexity of devices have increased significantly. Nevertheless, the FDA approval process is crucial to a vast spectrum of players in the health care system, from clinicians to insurers.
“The importance of the ‘seal of FDA approval’ cannot be overstated. Many manufacturers immediately encourage widespread use of their devices based on FDA approval through direct-to-consumer advertising, detailing to physicians, and continuing medical education venues,” the authors write. They add that because FDA-reviewed data are the only evidence clinicians can examine to evaluate the basis for the agency’s decisions, and because those data form the sole basis for systematic reviews and guideline development, the study reinforces the need for improved access to complete FDA reviews of both pharmaceutical and device data.
“To uphold the FDA’s mission of ensuring ‘safe and effective’ medical devices, it is essential that high-quality studies and data are available,” they conclude.