
    The latest research finds that evidence-based medicine has "defects"

    • Last Update: 2015-06-01
    • Source: Internet
    • Author: User
Source: Health | Authors: Jiang Hua, Yang Hao, Peng Jin | 2015-06-01

For the past 20 years, the methodological paradigm governing the whole field of clinical research has been evidence-based medicine. For clinicians, evidence-based medicine means three things: large-sample prospective clinical trials, especially large-sample randomized controlled trials (RCTs); meta-analysis; and evidence-based guidelines. Evidence-based guidelines are built on RCTs and meta-analyses, especially the latter. In the definition of the Cochrane Collaboration, an authoritative international organization for evidence-based medicine, these constitute the so-called "highest level" of clinical evidence and are regarded as the most important scientific basis for drawing up guidelines and guiding clinicians' diagnostic and treatment decisions.

The origin of meta-analysis. A meta-analysis, in short, takes published clinical trial data, applies some standardized processing, combines them, and then asks how the pooled result differs from, or agrees with, the original individual studies. If clinical trials can already be done, why do we also need meta-analysis? As doctors and scientists who have conducted meta-analyses since evidence-based medicine first took hold in China, we believe the main reasons are the following:

● The sample size of most clinical trials is not large enough. When the sample size is small, the power to test a hypothesis is low, yet large-sample clinical trials are expensive. Combining trials from many research groups through meta-analysis increases the sample size quickly at little extra cost, reducing the cost of reaching the sample size a firm conclusion requires.
● Even though individual trials have grown larger in recent years, big studies are still shaped by the subjective and objective circumstances of their investigators and funders; their designs are not always sound, and their conclusions are often mixed. A meta-analysis that strictly follows international standards sorts out these factors comprehensively and can clarify thinking on controversial questions. In this sense, meta-analysis acts to some extent as a judge of clinical trials.
● Since the subjects of many clinical trials are broadly similar, why not combine the data of these seemingly similar trials?

For clinical researchers in the second half of the last century, these reasons were compelling, and meta-analysis therefore developed and flourished. At the beginning of its rise, meta-analysis offered a good way to settle some important clinical disputes and became a landmark tool of clinical research. However, meta-analysis has an inherent methodological deficiency. It is called "heterogeneity". So what is heterogeneity?
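Before turning to that question, the pooling step described above can be made concrete with a minimal sketch of a fixed-effect meta-analysis using inverse-variance weighting, one standard way of combining effect estimates from several trials. The numbers are invented for illustration, and the article itself does not name a particular pooling model.

import numpy as np

# Hypothetical summary data from five trials of the same treatment:
# each study reports an effect estimate (e.g. a mean difference) and its standard error.
effects = np.array([0.30, 0.25, 0.42, 0.18, 0.35])
std_errors = np.array([0.15, 0.20, 0.25, 0.12, 0.18])

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / variance.
weights = 1.0 / std_errors**2
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled_effect:.3f} "
      f"(SE {pooled_se:.3f}, 95% CI {pooled_effect - 1.96*pooled_se:.3f} "
      f"to {pooled_effect + 1.96*pooled_se:.3f})")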
The word "heterogeneity" is a bit awkward for readers outside the statistical specialty, but it helps to consider its antonym, "homogeneity", which can be understood intuitively as the similarity between clinical trials. The objective reality is this: even when clinical trials study the same disease and the same treatment, differences between people and differences in trial design and setting mean that no two studies are ever absolutely identical. Studies can, however, resemble one another to varying degrees. When trials conducted at different times, in different places, or by different researchers are sufficiently similar, it is reasonable to combine them. To do so, a classification boundary must be drawn: pick out the similar studies from a large pool, confirm that they are essentially alike, and separate them from studies that differ too much in nature. Heterogeneity refers to these fundamental differences between studies. Studies that are essentially different should not be combined, and any meta-analysis that does not resolve the heterogeneity problem is unscientific.

The classical heterogeneity test: theoretical and statistical defects. The pioneers of meta-analysis were well aware that heterogeneity is the crux, and they searched for a way to measure it from early on. Eventually a quantitative method was developed: the so-called "heterogeneity test", represented by the Q and I² statistics. However, we have just published a study that proves mathematically that these classical heterogeneity tests, in use for more than a decade, are defective. In other words, the seemingly solid foundation of evidence-based medicine over the past decade is in fact built on sand.

Unreliability of meta-analysis: a mathematical proof. When Professor Cochran and his colleagues laid the foundations of meta-analysis, they found that different clinical trials differ in too many attributes of data collection and sample composition; proving mathematically that data from different studies may legitimately be combined is not easy. Defining heterogeneity and evaluating it quantitatively has therefore been one of the most important problems in the development of evidence-based medicine. The Q statistic measures the total deviation among the studies included in a meta-analysis: the larger the Q value, the greater the heterogeneity among the included studies; the smaller the Q value, the smaller the differences among them. However, the way Q is calculated makes it depend on the number of included studies. As that number grows, Q "over-inflates", producing false-positive test results: whether or not the studies really come from a similar population, once enough studies are included, the Q test will conclude that they come from different populations. To remove Q's improper dependence on the number of studies, the British evidence-based medicine expert Higgins J proposed a correction that subtracts the degrees of freedom (the number of studies minus one) from Q. This correction was called the I² test and was held to be more reasonable than Q. Higgins published the method in the British Medical Journal (BMJ) in 2003, and since then I² has quickly been accepted as the industry standard for measuring heterogeneity; it has been written into almost every evidence-based medicine textbook, including the Cochrane handbook for systematic reviews, and is the method used in almost every meta-analysis today.
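For reference, the standard definitions of the two statistics discussed above, as given by Cochran and in Higgins's 2003 BMJ paper (the notation is the conventional one, not taken from this article), are:

\[
Q = \sum_{i=1}^{k} w_i\,(\hat\theta_i - \bar\theta)^2,
\qquad
w_i = \frac{1}{\hat\sigma_i^{2}},
\qquad
\bar\theta = \frac{\sum_{i=1}^{k} w_i \hat\theta_i}{\sum_{i=1}^{k} w_i},
\]

where \(k\) is the number of studies, \(\hat\theta_i\) is the effect estimate of study \(i\) and \(\hat\sigma_i^{2}\) its estimated variance; under homogeneity, \(Q\) approximately follows a chi-squared distribution with \(k-1\) degrees of freedom. Higgins's correction is

\[
I^{2} = \max\!\left(0,\ \frac{Q - (k-1)}{Q}\right) \times 100\%.
\]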
However, the study we have now completed jointly with Sichuan Provincial People's Hospital and experts from several well-known research institutions in China proves that this classical method is not mathematically reliable. We have shown by numerical simulation that as the number of included studies grows, the value of I² rises with it, and the rise is monotonic. This means that as long as enough studies are pooled, samples drawn from the same population, with no heterogeneity at all, will still be judged heterogeneous by the I² test. The study also shows that Q depends on the number of studies in the same way. The heterogeneity test exists precisely to guarantee the reliability of meta-analysis, so that data from multiple clinical trials can be combined and the sample size expanded to reach the power needed to test a hypothesis. Yet we show that as the number of studies increases, exactly when meta-analysis combines more trials and enlarges the sample, the results of the heterogeneity test become unreliable. Ironically, when faced with contradictory and paradoxical conclusions, modern clinical research habitually calls for "larger sample trials". These two irreconcilable tendencies show that meta-analysis cannot be made logically self-consistent and that its methodological foundation has major defects.
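The article does not report the authors' simulation settings, so the sketch below only illustrates the kind of numerical experiment described above: simulate k trials sharing a single true effect (no heterogeneity at all), compute Q and I² for the pooled set, and repeat for increasing k. The per-arm sample size, effect size, and number of replications are arbitrary choices made for the illustration, not values from the study.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def one_meta(k, n_per_arm=100, true_effect=0.3):
    # Simulate k homogeneous trials (identical true mean difference) and
    # return Cochran's Q and Higgins's I^2 for the pooled set.
    effects = np.empty(k)
    variances = np.empty(k)
    for i in range(k):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(true_effect, 1.0, n_per_arm)
        effects[i] = treated.mean() - control.mean()
        variances[i] = treated.var(ddof=1) / n_per_arm + control.var(ddof=1) / n_per_arm
    w = 1.0 / variances
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    return q, i2

for k in (5, 10, 20, 50, 100, 200):
    results = np.array([one_meta(k) for _ in range(500)])
    qs, i2s = results[:, 0], results[:, 1]
    reject = np.mean(chi2.sf(qs, k - 1) < 0.05)  # Q-test rejection rate at alpha = 0.05
    print(f"k={k:4d}  mean Q={qs.mean():7.1f}  mean I^2={i2s.mean():5.1f}%  Q-test rejects {reject:.1%}")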
Reflections on evidence-based medicine founded on meta-analysis. It has been pointed out that medicine deserves to be called evidence-based only when the best research evidence currently available is applied cautiously, accurately and wisely, combined with the doctor's professional skill and years of clinical experience, and weighed against the values and wishes of the patient, with all three brought together to decide the patient's treatment. In the actual development of evidence-based medicine, however, the evidence-grading system placed such emphasis on meta-analysis and large-sample RCTs that clinical researchers and medical staff gradually came to equate "best evidence" with large-sample RCTs and the meta-analyses built on them. As time passed, more and more RCTs and meta-analyses produced contradictory results, leaving clinicians at a loss.

We now realize that every RCT faces the following unavoidable challenge: the factors that can substantially influence the outcome under investigation are far more numerous than was initially expected. What randomization tries to control is the difference between patients, and individual differences in essence reflect differences running from the genome up to the macroscopic phenotype. As our understanding of the genome deepens, we realize that many genes affect any given clinical phenotype (such as blood pressure, blood glucose level, or tumour type); 651 genes, for example, are closely related to wound healing. And that is only the genome: once transcription and expression levels are also considered, the number of molecular factors that can affect a clinical outcome grows by orders of magnitude. Even assuming these factors are randomly (say, normally) distributed in the population, tens of thousands of factors vary among individuals, which mathematically means that individuals occupy a space with tens of thousands of dimensions. In reality, an RCT that enrols even a few thousand subjects already counts as a rare, very large study. Faced with individual differences distributed in such an ultra-high-dimensional space, even a trial with thousands of subjects can hardly be truly random, and the real reason for a "significant" between-group difference in clinical outcome found in an RCT may well be entirely uncontrolled bias. It should therefore be recognized that the methodology of the RCT, a research paradigm born half a century ago, rests on an illusion: randomization balances individual variation only in the sense that it gives every participant an "equal opportunity" of being assigned to the experimental or the control group; it cannot guarantee that every factor affecting the trial's outcome is distributed with "equal opportunity" between the two groups. After all, the philosophy of the RCT is not very different from that of the cohort study or the case-control study: all of them observe and collect data. The mystification of, and unrestrained worship of, large samples, prospective clinical trials and the meta-analyses built on them is really a form of superstition. In the face of the complex biological phenomena of disease, it should be recognized that the first generation of evidence-based medicine and its underlying framework, based on the 18th-19th century
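The randomization argument above can be illustrated with a toy calculation (all numbers invented for the example, not taken from the article): even in a trial of 2,000 participants, if each person carries thousands of independent outcome-relevant factors, simple 1:1 randomization leaves many of those factors noticeably imbalanced between the arms purely by chance.

import numpy as np

rng = np.random.default_rng(1)

n_participants = 2000   # a large trial by clinical standards
n_factors = 10000       # hypothetical number of outcome-relevant factors per person

# Independent standard-normal factors for every participant (an idealized assumption).
factors = rng.normal(size=(n_participants, n_factors))

# Simple 1:1 randomization into control (0) and treatment (1) arms.
arm = rng.permutation(np.repeat([0, 1], n_participants // 2))

# Standardized mean difference (SMD) of every factor between the two arms.
diff = factors[arm == 1].mean(axis=0) - factors[arm == 0].mean(axis=0)
smd = diff / factors.std(axis=0, ddof=1)

print(f"factors with |SMD| > 0.1: {np.sum(np.abs(smd) > 0.1)} out of {n_factors}")
print(f"largest chance imbalance |SMD|: {np.abs(smd).max():.3f}")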