Links to Tai Chi Studies & Research?

Discussion on the three big Chinese internals, Yiquan, Bajiquan, Piguazhang and other similar styles.

Re: Links to Tai Chi Studies & Research?

Postby taiwandeutscher on Thu May 01, 2014 6:14 pm

rovere wrote:... As a caution, the vast majority of studies from China seem to be not that reliable in terms of Western standards of research.


+100!
hongdaozi
taiwandeutscher
Wuji
 
Posts: 1623
Joined: Thu Sep 11, 2008 7:48 pm
Location: Qishan, Taiwan, R. o. C.

Re: Links to Tai Chi Studies & Research?

Postby Bob on Thu May 01, 2014 7:56 pm

In general, it would be wise to be cautious regarding all medical research, even research conducted under Western standards.

I have been following the research of John Ioannidis, MD, and it has some very dire implications [though not hopeless ones] for all medical research:

http://www.macleans.ca/society/life/whe ... ioannidis/

When science isn’t science-based: In class with Dr. John Ioannidis

Lessons from one of the world’s most influential scientists

Julia Belluz

January 17, 2014
Last week at the Harvard School of Public Health, Dr. John Ioannidis – a Stanford professor and Science-ish hero – told a room filled with Harvard doctors (and one journalist) that they can’t trust most of the research findings science has to offer. “In science, we are very eager to make big stories, big claims,” he opened his lecture, with a mischievous grin. “The question is: are those claims accurate?”

According to Ioannidis, the answer – at least most of the time – is an unequivocal ‘no.’

A compact man in his 40s with stooped shoulders and thinning brown hair, Ioannidis has made a career researching research – or “meta-research” – examining not just single studies but many studies across fields as diverse as disease prevention, neuroscience and genomics. His boyish nerdiness and good nature belie the thorn in the side of science that he has become. For the last 20 years, he has amassed an internationally regarded body of research about all the ways science isn’t actually science-based. For this work, he’s considered “one of the most influential scientists alive.”

At a time when scientific knowledge is being produced at an unprecedented rate and global spending on life sciences research alone has topped $240 billion US, the need for people like Ioannidis – who can take a step back and examine trends, gaps, biases, waste and flaws – becomes more urgent than ever. If science continually fails at self-correction, Ioannidis is the closest thing this field has to a one-man self-correction machine.

In the Harvard class, he gave students an overview of his work and all the ways research goes off the rails. Here are some highlights:

1) Why every diet supposedly causes cancer:

In one of his studies – appropriately titled "Is everything we eat associated with cancer?" – Ioannidis and a co-author randomly selected 50 ingredients from recipes in The Boston Cooking-School Cook Book. They then looked at whether those ingredients were associated with an increased or decreased risk of cancer. At least one study was identified for 40 of the ingredients – from bacon and bread to sherry and sugar – and most of the claims made in the studies contradicted each other or were based on weak evidence. "Most of the ingredients had results on both sides, positive and negative," he said, making the point that many studies about cancer and nutrition are poorly designed. There were studies to support just about every claim on the popular topic – and many of them are too good to be true. "With one more serving of tomatoes," he told his class with a smirk, "half the burden of cancer in the world would go away."

2) Why most published research findings are false:

For Ioannidis, the key reason for this exaggeration and misrepresentation in research can be summed up in one word: bias. “This can be conscious, subconscious, or unconscious,” he said of these deviations from the truth – beyond chance or error – that pervert science. His favourite offender is ‘publication bias,’ which gives a falsely exaggerated impression of the science on a subject because not all studies that get conducted get published and the ones that do tend to have extreme results. It’s like doing a bunch of tests to find out whether your new vacuum works, and even though most tests fail, only reporting the one time the vacuum turned on.
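The filtering mechanism described above is easy to demonstrate with a toy simulation (purely illustrative; the effect size, study size, and "publishability" threshold are invented, not taken from Ioannidis's work): many small studies of a weak effect are run, but only the extreme results see print.

```python
import random
import statistics

random.seed(42)

def run_study(true_effect, n=20, sd=1.0):
    """Simulate one small study and return its observed mean effect."""
    return statistics.mean(random.gauss(true_effect, sd) for _ in range(n))

# The true effect is tiny; 1000 studies are run, but only results
# that look "impressive" (observed effect > 0.5) get published.
true_effect = 0.1
results = [run_study(true_effect) for _ in range(1000)]
published = [r for r in results if r > 0.5]

print(f"mean over all 1000 studies: {statistics.mean(results):.3f}")
print(f"mean over 'published' studies: {statistics.mean(published):.3f}")
print(f"studies published: {len(published)}")
```

The published average lands several times above the true effect of 0.1: the vacuum-cleaner situation in miniature.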

Ioannidis is well known for taking on the entire research enterprise in an essay entitled ‘Why Most Published Research Findings are False.’ In the paper, he described how a combination of uncertainty (no scientific finding is ever final) and publication bias creates a maelstrom of spurious findings that don’t hold up to scrutiny over the long-term.

3) Why you need to be cautious about early studies with big claims:

For another paper on the twists and turns in research, Ioannidis examined the reliability of findings in highly-cited original studies, focusing in particular on those which had been contradicted by later, more rigorous research. These influential studies were not about cold and abstract issues; many focused on the very questions that we all grapple with every day, such as whether to take supplements or not, and whether common medications – like aspirin for blood pressure – really work.

Here, he concluded, “Contradicted and potentially exaggerated findings are not uncommon in the most visible and most influential original clinical research.” In other words, splashy early studies with big effects were often found to be exaggerated or completely wrong. He also found that the original research continued to be cited, sometimes with complete silence on the more recent, contradictory evidence. For example, an early observational study revealed a supposed link between vitamin A supplementation and breast cancer, only to be overturned by a later, much higher-quality randomized controlled trial – yet the debunked observational study remained more highly cited and influential.

In a study, Ioannidis looked at six highly-cited journals between 1979 and 1983, combing for papers in which researchers claimed their basic scientific findings were going to lead to useful treatments. Out of 25,190 studies he identified, 101 made such claims. Yet, the vast majority of these studies were never followed up with randomized controlled trials to test those claims. Of the 27 that did, only five resulted in technologies that were licensed for clinical use in 2003 and only one has been widely used for the purposes for which it was licensed. This means the chances that someone promising a breakthrough and actually delivering one are about as slim as the chances of winning the lottery.
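The attrition this paragraph describes can be checked with back-of-envelope arithmetic, using only the figures quoted above:

```python
# Funnel from Ioannidis's survey of six highly-cited journals (1979-1983):
screened   = 25_190  # articles examined
claims     = 101     # claimed their findings would lead to useful treatments
rct_tested = 27      # claims later tested in randomized controlled trials
licensed   = 5       # technologies licensed for clinical use by 2003
in_use     = 1       # widely used for the licensed purpose

for label, k in [("made claims", claims), ("RCT-tested", rct_tested),
                 ("licensed", licensed), ("widely used", in_use)]:
    print(f"{label:12s}: {k:3d}  ({k / claims:6.1%} of the 101 claims)")
```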

4) How to make science less science-ish:

At the end of the course, Ioannidis shared a few ideas about how to improve the status quo in science. He suggested first that researchers need to learn to live with small effects in their studies. "Having worked in different fields, most of the effects that are of interest are small," he said. Most effects of a big magnitude – like the link between smoking and lung cancer – have already been recognized. Because the signal-to-noise ratio for small effects is poor, he said, scientists need to design their studies to account for the fact that the effect sizes they are chasing may be tiny.
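Designing for tiny effects can be made concrete with the standard normal-approximation sample-size formula for a two-group comparison, n ≈ 2((z_α + z_β)/d)² per group. This is a textbook formula, not something from the lecture; α = 0.05 (two-sided) and 80% power are assumed here.

```python
import math

def n_per_group(d, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects per group needed to detect standardized effect d
    with 80% power at a two-sided alpha of 0.05 (normal approximation)."""
    return 2 * ((z_alpha + z_beta) / d) ** 2

for d in (0.8, 0.5, 0.2, 0.1):
    print(f"effect size d = {d:>3}: ~{round(n_per_group(d))} subjects per group")
```

Halving the effect size roughly quadruples the required sample, which is why chasing small effects with small studies mostly produces noise.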

He also suggested that even if studies aren’t going to be replicated, researchers should at least try repeating their findings by getting an independent investigator to vet their raw data sets. Other fixes for science, which Ioannidis outlined in a new Lancet series on reducing inefficiency in research, include revamping the reward system for research and making data publicly available.

5) Why science, if flawed, is still the best alternative:

At the end of his week-long visit to Harvard, Science-ish asked Ioannidis whether he ever tired of poking holes in science, whether all his work has caused him to lose faith in the scientific process. With wide eyes, he exclaimed, “I remain as enthusiastic about science as ever!” He went on to describe all the benefits of science, why it is “the best thing that can happen to humans”: the value of rational thinking, of evidence over ideology, religious belief and dogma. “We have effective treatments and interventions and useful tests we can apply. We have both theoretical and empirical evidence that science is beneficial to humans and it’s a wonderful construct of thinking. . . Science is beautiful because it’s falsifiable.”

“There’s plenty of room to apply the very same (scientific) tools to the way science is done,” he added. “The question is: can we get there faster and more efficiently without wasting effort?”

Science-ish is a joint project of Maclean’s, the Medical Post and the McMaster Health Forum. Julia Belluz is senior editor at the Medical Post. She is currently on a Knight Science Journalism Fellowship at the Massachusetts Institute of Technology. Reach her at [email protected] or on Twitter @juliaoftoronto

http://www.mayoclinicproceedings.org/ar ... 25-6196(13)00403-5/abstract

How Many Contemporary Medical Practices Are Worse Than Doing Nothing or Doing Less?

John P.A. Ioannidis, MD, DSc
Stanford Prevention Research Center, Department of Medicine, Department of Health Research and Policy, Stanford University School of Medicine, Stanford, CA
Department of Statistics, Stanford University School of Humanities and Sciences, Stanford, CA

How many contemporary medical practices are not any better than or are worse than doing nothing or doing something else that is simpler or less expensive? This is an important question, given the negative repercussions for patients and the health care system of continuing to endorse futile, inefficient, expensive, or harmful interventions, tests, or management strategies. In this issue of Mayo Clinic Proceedings, Prasad et al [1] describe the frequency and spectrum of medical reversals determined from a review of all the articles published over a decade (2001-2010) in New England Journal of Medicine (NEJM). Their work extends a previous effort [2] that had focused on data from a single year and had suggested that almost half of the established medical practices that are tested are found to be no better than a less expensive, simpler, or easier therapy or approach. The results from the current larger sample of articles [1] are consistent with the earlier estimates: 27% of the original articles relevant to medical practices published in NEJM over this decade pertained to testing established practices. Among them, reversal and reaffirmation studies were approximately equally common (40.2% vs 38%). About two-thirds of the medical reversals were recommended on the basis of randomized trials. Even though no effort was made to evaluate systematically all evidence on the same topic (eg, meta-analyses including all studies published before and after the specific NEJM articles), the proportion of medical reversals seems alarmingly high. At a minimum, it poses major questions about the validity and clinical utility of a sizeable portion of everyday medical care.

Are these figures representative of the medical literature and evidence base at large? The sample assembled by Prasad et al is highly impressive, but it accounts for less than 1% of all randomized trials published in the same decade (an estimated >10,000 per year) and an even more infinitesimal portion of other types of study designs. If one could extrapolate from this sample by proportion, perhaps there have been several tens of thousands of medical reversal studies across all 23 million articles entered to date in PubMed. One has to be cautious with extrapolations, however. New England Journal of Medicine is clearly different from other journals in many ways besides having the highest impact factor among the list of 155 general and internal medicine journals. [3] It is widely read, and it has high visibility and impact both on the mass media and on medical practitioners. In this regard, the collection of 146 medical reversals reviewed by Prasad et al is a compendium of widely known, visible examples, and thus it can make excellent reading for medical practitioners and researchers, teachers, and trainees. At the same time, this characteristic is also a disadvantage: the articles published by NEJM are a highly selected sample, probably susceptible to publication and selective outcome reporting bias. There is substantial empirical evidence that the effect sizes of randomized trials published in NEJM, Lancet, or JAMA (the top 3 general and internal medicine journals in terms of impact factor [3]) are markedly inflated, in particular for small trials [4]; conversely, the effect sizes for large trials are similar to those seen in large trials on the same topic in other journals. [4] The interpretation of the results in NEJM is also likely to be more exaggerated compared with other journals because authors may feel pressured to claim that the results are impressive in order to get their work published in such a competitive venue. [5] Finally, when the quantitative data on effect sizes are examined, studies published in NEJM and other major journals have higher informativity (information gain or change in entropy), [6] ie, their results do change previous evidence more than the change incurred by the results of studies published elsewhere.

On the basis of these considerations, the frequency of medical reversals published in NEJM may be somewhat higher than what might be seen in publications in other journals. However, there are also some other counterbalancing forces that could cause bias in the opposite direction. For example, evaluations published in NEJM are likely to focus on commonly used, established medical practices. Such commonly used practices are likely to have had at least some previous evidence generated in the past supporting their use. Conversely, established interventions that are more narrowly applied and specialized (eg, those for which randomized trials might be published in small-circulation, highly specialized journals) may have been originally endorsed with even more sparse and worse-quality evidence, or even no evidence at all.

Other empirical approaches may also offer some insight about how commonly useless or even harmful treatments are endorsed. The Cochrane Database of Systematic Reviews has assembled considerable current medical evidence from clinical trials on diverse interventions. An empirical evaluation of Cochrane reviews in 2004 showed that most (47.8%) concluded that there is insufficient evidence to endorse the examined interventions. [7] A repeated evaluation in 2011 showed that this trend has not changed, with the percentage of insufficient evidence remaining as high as 45%. [8] Often, non-Cochrane reviews tend to have more positive conclusions about the assessed interventions, but it is unclear whether this finding reflects genuine superiority of the assessed interventions or bias in the interpretation of the results. [9] Although a substantial proportion of interventions are clearly harmful or inferior to others, many are still being used because of reluctance or resistance to abandoning them. [10] Some are even widely used despite the poor evidence, as Prasad et al [1] eagerly highlight with several examples. Moreover, different medical specialties may vary in their lack of evidence; eg, primary care, surgery, and dermatology interventions more frequently lack evidence to support their use compared with internal medicine interventions. [11]

Most new interventions that are successfully introduced into medical care have small effects that translate to modest, incremental benefits. [12] Empirical evaluations have suggested that well-validated large benefits for measurable outcomes such as mortality are uncommon in medicine. [13] Under these circumstances, even subtle changes in the composition and spectrum of the treated population over time, emergence of previously unrecognized toxicities, or a relatively disadvantageous cost can easily tip the evidence balance against the use of these interventions. Moreover, the introduction of interventions with limited or no evidence of benefit continues at fast pace even in specialties that have a strong tradition of evidence-based methods. For example, in almost half (48%) of the recommendations in major cardiology guidelines, the level of evidence is grade C, ie, limited evidence and expert opinion have a highly influential presence. [14]

Once we divert beyond traditional treatments (eg, drugs or devices) to diagnostic tools, prognostic markers, health systems, and other health care measures, randomized trials are a rarity. [15] For example, it has been estimated that, on average, there are only 37 publications per year of randomized trials assessing the effectiveness of diagnostic tests. [15] Some modern technologies (eg, "omics") promise to introduce new tools into medical management at such a high pace that many investigators are wary of even thinking about the possibility of randomized testing. Despite better laboratory science, fascinating technology, and theoretically mature designs after 65 years of randomized trials, ineffective, harmful, expensive medical practices are being introduced more frequently now than at any other time in the history of medicine. Under the current mode of evidence collection, most of these new practices may never be challenged.

The data collected by Prasad et al [1] offer some hints about how this dreadful scenario might be aborted. The 146 medical reversals that they have assembled are, in a sense, examples of success stories that can inspire the astute clinician and clinical investigator to challenge the status quo and realize that doing less is more. [16]

It is not with irony that I call these disasters "success stories." If we can learn from them, these seemingly disappointing results may be extremely helpful in curtailing harms to patients and cost to the health care system. Although it is important to promote effective practices ("positive success stories"), it is also important to promote and disseminate knowledge about ineffective practices that should be reversed and abandoned. Also, research is needed to find the most efficient ways of applying the knowledge learned from these "negative" studies. Does it suffice to compile lists of practices that should be abandoned? [10] What types of educational approaches and reinforcement could enhance their abandonment? What are the obstacles (commercial, professional, system inertia, or other) that hinder this disimplementation step and how can they be best overcome? Are there some incentives that we can offer to practitioners and health systems to apply this "negative" knowledge toward simplifying and streamlining their practices?

Some of the messaging may require inclusion in guidelines, given the widespread attention that these documents gain, particularly when issued by authoritative individuals or groups, and their capacity to affect clinical practice. Should we require generally higher levels of evidence before practice guidelines are recommended? Moreover, if and when practice guidelines are discredited or overturned by additional information, should notification of practitioners and the public not be undertaken with the same, if not more, vigor as when the practices were first recommended?

Finally, are there incentives and anything else we can do to promote testing of seemingly established practices and identification of more practices that need to be abandoned? Obviously, such an undertaking will require commitment to a rigorous clinical research agenda in a time of restricted budgets. However, it is clear that carefully designed trials on expensive practices may have a very favorable value of information, and they would be excellent investments toward curtailing the irrational cost of ineffective health care.



References
1. Prasad V, Vandross A, Toomey C, et al. A decade of reversal: an analysis of 146 contradicted medical practices. Mayo Clin Proc. 2013;88:790-798.
2. Prasad V, Gall V, Cifu A. The frequency of medical reversal. Arch Intern Med. 2011;171:1675-1676.
3. ISI Web of Science. Journal Citation Reports. Accessed May 9, 2013.
4. Siontis KC, Evangelou E, Ioannidis JP. Magnitude of effects in clinical trials published in high-impact general medical journals. Int J Epidemiol. 2011;40:1280-1291.
5. Young NS, Ioannidis JP, Al-Ubaydli O. Why current publication practices may distort science. PLoS Med. 2008;5:e201.
6. Evangelou E, Siontis KC, Pfeiffer T, Ioannidis JP. Perceived information gain from randomized trials correlates with publication in high-impact factor journals. J Clin Epidemiol. 2012;65:1274-1281.
7. El Dib RP, Atallah AN, Andriolo RB. Mapping the Cochrane evidence for decision making in health care. J Eval Clin Pract. 2007;13:689-692.
8. Villas Boas PJ, Spagnuolo RS, Kamegasawa A, et al. Systematic reviews showed insufficient evidence for clinical practice in 2004: what about in 2011? The next appeal for the evidence-based medicine age [published online ahead of print July 3, 2012]. J Eval Clin Pract. http://dx.doi.org/10.1111/j.1365-2753.2012.01877.x.
9. Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62:380-386.e1.
10. Prasad V, Cifu A, Ioannidis JP. Reversals of established medical practices: evidence to abandon ship. JAMA. 2012;307:37-38.
11. Matzen P. How evidence-based is medicine? A systematic literature review [in Danish]. Ugeskr Laeger. 2003;165:1431-1435.
12. Djulbegovic B, Kumar A, Glasziou PP, et al. New treatments compared to established treatments in randomized trials. Cochrane Database Syst Rev. 2012;10:MR000024.
13. Pereira TV, Horwitz RI, Ioannidis JP. Empirical evaluation of very large treatment effects of medical interventions. JAMA. 2012;308:1676-1684.
14. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines [published correction appears in JAMA. 2009;301(15):1544]. JAMA. 2009;301:831-841.
15. Ferrante di Ruffano L, Davenport C, Eisinga A, Hyde C, Deeks JJ. A capture-recapture analysis demonstrated that randomized controlled trials evaluating the impact of diagnostic tests on patient outcomes are rare. J Clin Epidemiol. 2012;65:282-287.
16. Grady D, Redberg RF. Less is more: how less health care can result in better health. Arch Intern Med. 2010;170:749-750.
Last edited by Bob on Thu May 01, 2014 7:57 pm, edited 2 times in total.
Bob
Great Old One
 
Posts: 3747
Joined: Tue May 13, 2008 4:28 am
Location: Akron, Ohio

Re: Links to Tai Chi Studies & Research?

Postby rovere on Fri May 02, 2014 8:08 am

Agreed, Bob. Anyone who has had to grind through graduate courses in research methodology will become discouraged with the process and conclusions of clinical studies and with the poor interpretation of the data. Researchers also lose sight of the difference between efficacy and effectiveness in their drive for publication. (We have, for example, cured cancer in rats several times.) In terms of tai chi and health there are numerous shortcomings and often contradictory results, hence the notion of "promising but needs more research." The bias and confounders that Ioannidis talks about are things good researchers are aware of, and a good study will try to avoid those pitfalls. In the tai chi studies, meta-analysis is often difficult to do, given the lack of a baseline standard from study to study and the misinterpretation of effect modification. Also, tai chi studies tend to be quasi-experimental and open to selection bias; that's the nature of the beast.

The academic shortcomings of mainland China are notorious: 1,500+ professors in the last year have lost their teaching positions because of forged degrees, and the tai chi studies simply lack the academic rigor needed for good research (e.g., they contain manufactured data), much worse than what you cited in your post.

regards,

Dennis
rovere
Santi
 
Posts: 13
Joined: Thu Jun 19, 2008 1:28 pm

Re: Links to Tai Chi Studies & Research?

Postby LaoDan on Fri May 02, 2014 11:55 am

As in everything else, scientific study has difficulties. One thing that is not mentioned by Bob in his post is the ‘bell curve’:

[Image: a bell curve (normal distribution)]


Scientists use statistics to help make sense of the natural variability of the things being studied; they look for ‘statistical significance’ in order to determine relevance. The design of experiments (including the use of proper controls) is important for reducing ‘noise’ in a study, so that significance can be seen in what might otherwise be marginal effects. But the often small (though still significant) differences seen in scientific experiments become even more complicated when one attempts to apply them in medical treatments for individuals.

In order to test specific items in a complex system, researchers need to isolate the item of interest from the overall system. But typically they also need to show later that their discoveries still apply to complex systems, which is why animal studies (in mice, etc.) and human trials are so important. So there is always a tradeoff between isolating something in order to reduce and control the variables, and observing how that thing operates in a complex system with numerous interconnected and compensatory mechanisms.

If you consider that each variable has its own bell curve, it is rather remarkable that we obtain anything useful from medical research! Everyone is unique (even genetic twins differ in their responses to treatments), so each person will respond differently to a treatment. All the ‘side effects’ of drugs are really undesired effects: they affect variables in the complex system other than the isolated item that the experiments and treatments were designed around.
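The per-individual bell curve can be illustrated with a small simulation (the numbers are invented for illustration: a treatment whose average benefit is real but small relative to individual variability):

```python
import random
import statistics

random.seed(7)

# Average benefit is +0.3 units, but individual responses are spread
# with standard deviation 1.0, so many individuals see no benefit at all.
mean_effect, sd = 0.3, 1.0
responses = [random.gauss(mean_effect, sd) for _ in range(100_000)]

improved = sum(r > 0 for r in responses) / len(responses)
print(f"average response: {statistics.mean(responses):+.2f}")
print(f"share of individuals who actually improved: {improved:.1%}")
```

A clearly "positive" treatment on average still leaves a substantial minority (here nearly 40%) no better off, which is why group-level significance translates poorly to individual prediction.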
LaoDan
Wuji
 
Posts: 624
Joined: Mon May 17, 2010 11:51 am

Re: Links to Tai Chi Studies & Research?

Postby Bob on Fri May 02, 2014 12:39 pm

As my nonparametric statistics professor once told us during a lecture, it's dangerous to presuppose that population parameters follow a normal distribution. So I would be cautious about presupposing that every variable has its own bell curve; more to your point, many of the variables may have population parameters with unknown distributions.
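The professor's caution is easy to see with a skewed population (a lognormal distribution here, chosen arbitrarily for illustration), where summaries that assume a bell curve mislead:

```python
import random
import statistics

random.seed(1)

# Heavily right-skewed population: the bell-curve assumption fails,
# and the mean is dragged well above the typical (median) value.
sample = [random.lognormvariate(0, 1) for _ in range(100_000)]

print(f"mean   = {statistics.mean(sample):.2f}")
print(f"median = {statistics.median(sample):.2f}")
```

For a lognormal(0, 1) population the true mean is e^0.5 ≈ 1.65 while the true median is 1, so a normal-theory summary built around the mean describes almost no one; rank-based (nonparametric) methods avoid leaning on that assumption.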

If you go back to my earlier post, I don't think we will ever have a way to make definitive statements regarding things like taiji research. I guess it will be like a courtroom: you gotta weigh all the evidence.

From another blog site, by a retired epidemiologist:

"Even the best randomized trials have limits, among them is the simple fact that randomization does not guarantee equal distribution of all the potential variables that could explain an outcome which is why replication is essential.

Simply put, randomization means that the risk of differences caused by other factors can be estimated if one "assumes" a particular probability distribution. And, of course, one's assumed probability distribution may not be right.

The Cochrane reliance on randomized clinical trials is a problem as other research designs can arrive at valid conclusions. For instance, Kenneth J. Rothman has shown that one can draw equally valid conclusions from well-done case-control studies (Kenneth J. Rothman & Sander Greenland, “Modern Epidemiology, 2nd Ed.”, Lippincott Williams & Wilkins, 1998).

In addition, randomized trials usually take quite some time and are very expensive, so we have come up with other approaches to base decisions on, including systematic reviews and evidence-based clinical practice guidelines.

Their strengths are that the “better” ones include the search criteria, a complete references list, often tables summarizing the key variables from each study, and a clear decision process.

If any of the steps are omitted or unclear then we have a problem; but this problem exists for randomized clinical trials, case-control studies, observational studies, etc. I have had occasion to wonder how research papers I have read arrived at a conclusion since I couldn’t follow the methodology.

A well-done paper, whether direct research or a review, should allow the reader to clearly follow what was done, thus allowing for replication and/or criticism. The fact that sometimes people go on talk shows or in some other format draw unwarranted conclusions does not change the basics. "
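The epidemiologist's first point, that randomization does not guarantee balanced groups, can be checked with a toy simulation (parameters invented for illustration: 20 patients per arm and a binary baseline covariate with 50% prevalence):

```python
import random

random.seed(3)

def covariate_gap(n_per_arm=20, prevalence=0.5):
    """Randomize one trial; return the between-arm gap in covariate rate."""
    arm_a = sum(random.random() < prevalence for _ in range(n_per_arm))
    arm_b = sum(random.random() < prevalence for _ in range(n_per_arm))
    return abs(arm_a - arm_b) / n_per_arm

gaps = [covariate_gap() for _ in range(10_000)]
big = sum(g >= 0.2 for g in gaps) / len(gaps)
print(f"trials with a 20-point-or-larger covariate imbalance: {big:.1%}")
```

In small trials a sizeable baseline imbalance shows up in roughly a quarter of randomizations, which is exactly why replication matters.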

http://www.sciencebasedmedicine.org/coc ... incomments
Bob
Great Old One
 
Posts: 3747
Joined: Tue May 13, 2008 4:28 am
Location: Akron, Ohio

Re: Links to Tai Chi Studies & Research?

Postby Steve James on Fri May 02, 2014 12:40 pm

Let's agree that, in Chinese traditional culture, TCC has been considered beneficial (for health). People can say that, but the real question is "why." Is it because of the specific postures, done in a particular order? Then, why those postures, and why that order? Would it ever be possible, given the variations among the different styles, to say that a posture/movement must be done in a specific way, at a specific speed, in order to get "X" results? Or, are the specific movements or speed irrelevant, and that the benefit comes from something else?

I tend to believe that most research done on TCC starts out from the premise that TCC has health benefits. That's not always a bad thing: the goal can be to find out the exact mechanisms that produce the benefit. However, the question of why similar studies are not done on the other IMAs, and CMAs in general, is begging to be asked.
"A man is rich when he has time and freewill. How he chooses to invest both will determine the return on his investment."
User avatar
Steve James
Great Old One
 
Posts: 21212
Joined: Tue May 13, 2008 8:20 am

Re: Links to Tai Chi Studies & Research?

Postby Bob on Fri May 02, 2014 1:47 pm

Extrapolating from Yang's taiji martial arts foundation, applying taiji to the cultivation and development of overall health was never meant to be employed as a reductionist "silver bullet" treatment for chronic diseases. [This is not in response to or a challenge to Steve's remarks, just my general thoughts and how I resolved my own conflicts over how the research validates the practice of taiji.]

It should be one part of a general lifestyle (I like the term Yangsheng), and the scientific trials of taiji's effectiveness probably fail to capture it as part of a synergistic lifestyle healing process.

In one sense, RCTs treat taiji as more or less a treatment drug, and I wouldn't take any of the findings (positive or negative) as definitive.

For me it would be one piece of information in the construction of a healthy lifestyle and the subjective sense of wellness would be equally as important as any physiological effects.

Taiji is a wonderful system of lifestyle exercise, but it is not magic, and the practice itself can't bring about miraculous healings.
Last edited by Bob on Sat May 03, 2014 4:04 am, edited 1 time in total.
Bob
Great Old One
 
Posts: 3747
Joined: Tue May 13, 2008 4:28 am
Location: Akron, Ohio

Re: Links to Tai Chi Studies & Research?

Postby yeniseri on Fri May 02, 2014 10:01 pm

A comprehensive Cochrane Review of TCM Therapies

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2856612/
When fascism comes to US America, It will be wrapped in the US flag and waving a cross. An astute patriot
yeniseri
Wuji
 
Posts: 3803
Joined: Sat Dec 12, 2009 1:49 pm
Location: USA

Re: Links to Tai Chi Studies & Research?

Postby Bob on Sat May 03, 2014 3:52 am

Nice article that pretty much covers my thoughts on TCM, taiji and all things labeled "alternative medicine".

I am glad Cleveland Clinic took on that option [the State of Ohio recently passed legislation permitting the practice of Asian Medicine but under the close scrutiny of allopathic physicians].

I also taught taiji at the University for a number of years [not currently, but years back] and always preferred students who took the class motivated less by any thoughts of healing and more by the cultural, aesthetic, and/or curiosity side of the art. [I had a young student tell me after two sessions that she could now see my qi--a bit much for me, but I did not take on the challenge of questioning what she saw. My response was, "That's interesting, but we need to practice the movements now!"]

I actually had the [retired] head of a local medical school take the course. His wife seemed to love it, but he had a hard time: mainly with his lack of coordination and the embarrassment of not being able to master the movements on the first try. Too extrinsically oriented and achievement-motivated, he couldn't accept that he had retired from the "rat race" of academia. LOL:

http://www.huffingtonpost.com/david-kat ... ealth+News

David Katz, M.D., Yale Prevention Research Center

A Holistic View of Evidence-Based Medicine: of Horse, Cart and Whip

Posted: 05/02/2014 2:17 pm EDT Updated: 05/02/2014 2:59 pm EDT

On Tuesday of this week (4/29/14), I was on the Katie Couric Show to discuss Integrative Medicine. Somewhat ironically, I returned from Manhattan that same day to a waiting email from a colleague, forwarding me a rather excoriating critique of integrative medicine on The Health Care Blog, and asking me for my opinion.

The juxtaposition, it turns out, was something other than happenstance. The Cleveland Clinic has recently introduced the use of herbal medicines as an option for its patients, generating considerable media attention. Some of it, as in the case of the Katie Couric Show, is of the kinder, gentler variety. Some, like The Health Care Blog's, is rather less so. Which is the right response?

One might argue, from the perspective of evidence based medicine, that harsh treatment is warranted for everything operating under the banner of "alternative" medicine, or any of the nomenclature alternative to "alternative" -- such as complementary, holistic, traditional, or integrative. One might argue, conversely, for a warm embrace from the perspective of patient-centered care, in which patient preference is a primary driver.

I tend to argue both ways, and land in the middle. I'll elaborate.

First, I am a card-carrying member (well, I would be if they issued cards) of the evidence-based medicine club. I am a conventionally trained Internist, and run a federally funded clinical research laboratory. I have taught biostatistics, evidence-based medicine, and clinical epidemiology to Yale medical students over a span of nearly a decade. I have authored a textbook on evidence-based medicine.

But on the other hand, I practice Integrative Medicine, and have done so for nearly 15 years. And I represent Yale on the steering committee of the Consortium of Academic Health Centers for Integrative Medicine.

Odd as it may seem, I consider these platforms entirely compatible. I did not go into Integrative Medicine because I believe "natural" is reliably better or safer than "scientific." I respect the often considerable prowess of modern medical technology and pharmaceuticals. And, frankly, I have never much cared whether a therapy derived from a tree leaf, or a test tube. I have cared about whether it was safe, and whether it was effective.

As an Internist taking care of patients for many years, one thing was painfully clear: I could not make everyone better. And while this deficiency might have been my personal shortcoming, it was much more than that. Modern medicine couldn't make everyone better. We tended to fall down particularly when it came to treating chronic pain, or chronic fatigue. We tended to stumble rather badly over any condition with "syndrome" in the title (as opposed to a "disease," a "syndrome" is a description of symptoms generally lacking a clear explanation).

Integrative Medicine -- a fusion of conventional and "alternative" treatments -- provided patients access to a wider array of options. So, for instance, if medication was ineffective for anxiety or produced intolerable side effects, options such as meditation, biofeedback, or yoga might be explored. If analgesics or anti-inflammatories failed to alleviate joint pain or produced side effects, such options as acupuncture or massage could be explored.

The array of potential options extends, of course, to herbal remedies and nutriceuticals as well -- the apparent focus at the Cleveland Clinic. And, more controversially, it potentially extends to modalities that conventionally trained clinicians find implausible, such as homeopathy or energy therapies. I won't get too deep into such weeds today, but have done so before.

Here are a few key considerations from my perspective.

1. Evidence is not a reliable differentiator of conventional and alternative medicine. By the standards that now prevail, more than 50 percent of conventional medical practice is not truly "evidence based." Some years ago, colleagues and I were charged in a CDC grant to chart the evidence related to complementary and alternative medicine. We wound up inventing a technique called "evidence mapping," since adopted by the World Health Organization and applied to an international traumatic brain injury program. Our finding was that in the realm of alternative medicine, some practices are rather well studied, some are understudied, and some unstudied. Much like conventional medicine, in other words.

2. To the extent that evidence does differentiate conventional and alternative medicine, it's often because -- in the pursuit of evidence -- cart and horse routinely swap positions and money cracks the whip. If the horse pulled the cart, then what gets studied would be what is needed, and what looks promising. But in our world, what gets studied often begins with what can be patented. It now costs nearly a billion dollars to bring a new FDA-approved drug to market. The only rationale for spending that much is the likelihood of making back much more -- and that only occurs when the product in question is exclusive and proprietary, i.e., patented. There is a classic demonstration of the power of this influence.

More than a decade ago, a study of about 50 people followed for about three months was used to "prove" that coenzyme Q10 was ineffective for treating congestive heart failure. At about the same time, a study of nearly two thousand people followed for years proved that the proprietary drug, carvedilol, was effective. The difference at the time was not really evidence -- it was money. A great deal more money was spent on the carvedilol trial -- because no one can patent coenzyme Q10.

More than a decade later, we now have evidence that coenzyme Q10, when added to standard therapy for heart failure, can reduce mortality by as much as 50 percent. That is a stunning effect for something that has been "alternative" medicine all this time, and was declared useless by the conventional medical establishment. Unless we are willing to practice money-based medicine, or patent-based medicine, we are obligated to recognize that the playing field for generating evidence is not level. It is tilted steeply in favor of patent holders.

3. Evidence is not black or white. It comes in shades of gray. Clinical decisions are easy if a treatment is known to be dangerous and ineffective, or known to be safe and uniquely effective. But what if a given patient has tried all the remedies best supported by randomized clinical trials, but has "stubbornly" refused to behave as the textbooks advise and failed to get better? Or what if a patient just can't tolerate the treatments with the most underlying evidence? One option is to tell such a patient: See ya! But I think that is an abdication of the oaths we physicians took. When the going gets tough, we are most obligated to take our patients by the hand, not wave goodbye.

To address this very scenario, colleagues and I have developed and published a construct that examines therapeutic options across five domains: safety; efficacy; quality of evidence; therapeutic alternatives; and patient preference. If a patient is otherwise running out of options and is in need, trying something that is likely to be safe and possibly effective -- makes sense. If there is something that is likely to be safer and more effective, then that should be used first.

But by recognizing and prioritizing the obligation to blend responsible use of evidence with responsiveness to the needs of patients -- needs that often go on after the results of large randomized clinical trials have run out -- we can wind up, inadvertently even, in the realm of Integrative Medicine. That's how I got here. The needs of my patients led and I followed. And yes, the wider array of treatment options I can offer working side by side with my naturopathic colleagues absolutely does mean I have been able to help patients I otherwise could not.

Integrative Medicine should not involve a choice between responsible use of evidence and responsiveness to the needs of all patients. It should combine the two. We should do the best we can with the evidence we have, but recognize that high quality evidence may start to dwindle before our patient's symptoms start to resolve. We should resolve to confront this challenge with our patients, not leave them to fend for themselves.

The belief that treatments are intrinsically better just because they are "natural" is fatuous and misguided. Smallpox, botulinum toxin and rattlesnake venom are natural. Nature is not benevolent.

But the belief that conventional medicine is reliably evidence-based is equally fatuous. Much of what we do is simply tradition. And much of the evidence we get is more about money than other imperatives. Often in the world of alternative medicine, the problem is not evidence of absent effects -- but a relative absence of evidence, in turn engendered by an absence of patents and financial incentives. The history of coenzyme Q10 is a cautionary tale if ever there was one.

Integrative Medicine is not an invitation to supplant evidence with wishful thinking. It is an invitation to a wider array of treatment options, and the prospect of effectively addressing patient need more of the time. Realizing such potential benefits -- at the Cleveland Clinic, or anywhere else -- requires both open mindedness and careful skepticism. It calls for a holistic view of the full array of therapeutic options, and the recognition that both conventional and alternative medicine are home to baby and bathwater. Differentiating can be hard -- and we and our patients should be confronting that challenge together.

-fin

Dr. David L. Katz directs the Integrative Medicine Center at Griffin Hospital in Derby, CT; and the Yale University Prevention Research Center, also housed at Griffin Hospital. He is President of the American College of Lifestyle Medicine and author of Disease Proof.
Last edited by Bob on Sat May 03, 2014 4:07 am, edited 1 time in total.
Bob
Great Old One
 
Posts: 3747
Joined: Tue May 13, 2008 4:28 am
Location: Akron, Ohio

Re: Links to Tai Chi Studies & Research?

Postby Bob on Sat May 03, 2014 4:20 am

FWIW, his article on Plausibility was one of the more useful pieces I have read regarding the war between alternative medicine and allopathic/biomedicine.

http://www.davidkatzmd.com/admin/archiv ... bility.pdf

Descartes’ Carton–On Plausibility
David L. Katz, MD, MPH

[concluding note]
I am writing, you are reading, and we are thinking—and therefore we ostensibly are.

But that we are is a veritable assault on plausibility. What we think we are is merely what we perceive ourselves to be.

In a vast sea of wonder, where empty space is only interrupted by matter made of still emptier spaces, the most implausible of all things may be that we exist to ponder it.

But since we are, we have abundant cause to think, most humbly, about the plausibility of all else.
Last edited by Bob on Sat May 03, 2014 4:21 am, edited 1 time in total.
Bob
Great Old One
 
Posts: 3747
Joined: Tue May 13, 2008 4:28 am
Location: Akron, Ohio

Re: Links to Tai Chi Studies & Research?

Postby yeniseri on Sat May 03, 2014 10:08 am

I did see the programme on Katie Couric and it was well presented.
Alternative medicine is definitely cost-effective under individualized treatment protocols. The presenters also noted the need for awareness when combining prescription drugs with herbal medicines, due to the potential for misuse and serious adverse events. As was said, just because something is natural does not mean there will be no problems associated with alternative medicine (specifically herbal medicine prescription).
When fascism comes to US America, It will be wrapped in the US flag and waving a cross. An astute patriot
yeniseri
Wuji
 
Posts: 3803
Joined: Sat Dec 12, 2009 1:49 pm
Location: USA
