Don’t just rub it better, cross it over – the analgesic effect of crossing your arms.

The gate control theory gave us all a theoretical rationale for ‘rubbing it better’ – activation of Aβ fibres and subsequent ‘closing of the gate’ in the dorsal horn. Well, there is a new paper just out in Pain[1] that raises the possibility of another quick and easy analgesic strategy – crossing your arms. My mum reckons that her mum was onto that decades ago – perhaps we have uncovered a mechanism behind an age-old trick. Regardless, here is what we did: there were two experiments. The first was a behavioural experiment, which means that participants just reported what happened for them in response to a series of stimuli. The second was a physiological experiment, in which electrical responses to the stimuli were recorded through electrodes placed over the scalp. The experimental design was more or less the same for both – two postural conditions (arms in front and uncrossed versus arms in front and crossed over one another) and several stimulus conditions (noxious laser stimuli at different intensities and non-noxious electrical stimuli at different intensities).

Are you thinking to yourself, ‘Why do this experiment?’ Well, the theoretical background to it lies in a bunch of research that very strongly suggests that, when we determine the location of a somatosensory stimulus, we integrate its somatotopically-organised coordinates with its spatially-organised coordinates. That is, the brain allocates a location on the surface of the skin, using somatotopically-organised maps such as the one represented in primary somatosensory cortex. This information is then integrated with the location of the stimulus in space, relative to the body midline (to learn more about this stuff, buy Charlie Spence’s book. Actually, buying it will not help you learn more about this stuff – you would have to both buy it and read it). The critical bit is this: it is this integration of somatotopic and spatial frames of reference that is essential for awareness of the stimulus – for the actual production of a conscious experience. This raises the tantalising possibility that if we put the somatotopic frame of reference into conflict with the spatial frame of reference, we might be able to impair the brain’s ability to create the perception. So that is what we did. By crossing the arms.

In case you hadn’t noticed, your left hand spends almost all of its time on the left side of your body and your right hand spends its time on the right side of your body. If our theory is correct, then crossing the arms presents an unusual and conflicting situation, because now the right hand is on the left and the left is on the right. Right?
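
To make the conflict concrete, here is a toy sketch (illustrative only – not a model from the paper) of the two location labels a stimulus carries: a somatotopic one that is a fixed property of the skin, and a spatial one that flips when the arms are crossed.

```python
# Toy illustration only - not a model from the paper. A stimulus to a hand
# carries a somatotopic label (which hand) and a spatial label (which side
# of the body midline). Crossing the arms puts the two labels in conflict.

def stimulus_labels(hand: str, arms_crossed: bool) -> dict:
    somatotopic = hand  # "left" or "right": a fixed property of the skin
    # Crossing the arms moves each hand to the opposite side of space.
    spatial = hand if not arms_crossed else ("right" if hand == "left" else "left")
    return {"somatotopic": somatotopic,
            "spatial": spatial,
            "frames_conflict": somatotopic != spatial}

print(stimulus_labels("left", arms_crossed=False))
# {'somatotopic': 'left', 'spatial': 'left', 'frames_conflict': False}
print(stimulus_labels("left", arms_crossed=True))
# {'somatotopic': 'left', 'spatial': 'right', 'frames_conflict': True}
```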

Well, in the first experiment, our prediction was supported: exactly the same stimuli in each domain – noxious or non-noxious – evoked less intense perceptual experiences if the arms were crossed than if they were not. The difference was pretty small but, as we argued to the journal that published it, PAIN, the important thing here is that an effect occurs at all. This is the beginning, not the clinical application end. The second experiment investigated the brain’s processing of the information. If it is indeed a disruption of the integration of somatotopic and spatial frames of reference that underpins the effect, rather than a change in the sensory signal arriving at the brain, then we should see a change in a late stage of the stimulus-evoked brain response, not an early stage (a shift in the early stage would be more consistent with a change in S1 activation). Pleasingly, our prediction was again supported – there was a reduction in the late wave (N2-P2) but not the early wave (N1).

Who cares? Well, we do, obviously. We have targeted the very process by which pain emerges from the brain and shown that it is possible to reduce pain simply by putting two frames of reference into conflict. This study opens up a new line of research to pursue better ways to do this. In the meantime, it seems that, when we hurt our hand, we should not only rub it better, but also cross it over.

About Lorimer Moseley

Lorimer is an NHMRC Senior Research Fellow with twenty years of clinical experience working with people in pain. After spending some time as a Nuffield Medical Research Fellow at Oxford University he returned to Australia in 2009 to take up an NHMRC Senior Research Fellowship at Neuroscience Research Australia (NeuRA). In 2011, he was appointed Professor of Clinical Neurosciences & the Inaugural Chair in Physiotherapy at the University of South Australia, Adelaide. He runs the Body in Mind research groups. He is the only Clinical Scientist to have knocked over a water tank tower in Outback Australia.

Reference


[1] Gallace A, Torta DM, Moseley GL, & Iannetti GD (2011). The analgesic effect of crossing the arms. Pain, 152(6), 1418-1423. PMID: 21440992

Comments

  1. I have one word to say on this…. KNITTING! :)

    ian stevens Reply:

    Betsan, that was a real coincidence. A colleague the other day told me that David Reilly (a well-known Dr in Glasgow) went to the N British Pain Society meeting and said your talk/poster on knitting with ongoing pain was the most interesting thing there!
    Tell me, do you have many (any) men doing the knitting? I know years ago it was a male activity, but not any more… maybe a revival of the skill would be useful in these austere times, as well as getting the homuncular maps moving.

    Betsan Corkhill Reply:

    Ian – David Reilly was the chair for the morning. I have to say it was a bit daunting giving a talk on knitting at the Royal College of Physicians (never envisaged myself doing that!) but the feedback suggests it went down really well!

    We do have men knitting but we take a different approach with them. I introduce it by first reassuring them that they don’t have to become ‘a knitter’ and that we would like them to use knitting as a tool to achieve a specific aim. This could be to facilitate a meditative state, reduce anxiety or improve sleep patterns, for example. We’ve also had a lot of success with PTSD symptoms. They’re normally happy to use knitting if they have a reason for doing it (and something they can tell their family!). Young boys respond well if it’s introduced as ‘soft construction’. As you know, knitting used to be a male occupation, so many men over 60 will have learnt as boys and will only need a quick reminder.

    The men don’t like coming to the group, though, and prefer to knit at home. We’ve noticed that conversation becomes quite deep, quite quickly, when people are engaged in the automaticity of knitting, so the conversation in the group (being mostly female) very quickly gets onto topics the men don’t enjoy. I’ve come to the conclusion that single-sex groups could be the way forward for many reasons.

  2. I wonder if this is why I had less pain when I went into massage therapy, then into Myofascial Release therapy…
    Maybe there wasn’t much to my theory of taking care of my body better, awareness of triggers, stellar bodywork received? Actually just kidding, but this is fascinating. When I next find myself standing with my arms crossed, I will think of it differently. As I do when I sneeze.
    Lorimer! I sneezed in my Biology class the other day, flashed back to your class at RIC Chicago over the past weekend… and missed the next three sentences my prof said.
    Thanks for that, and for that amazing experience at RIC.

    Ami

  3. Jo Oliver says:

    OK Lorimer, I have just crossed them… how long before the electric shocks stop, and do I leave them crossed until they do? Waiting!!

  4. The more we can understand about our sense of self, the better we can be at re-training the changes that occur in chronic pain states. This line of thought certainly adds to that in terms of spatial representation. I like to discover as much as possible about the individual’s experience – not just of pain, but also their sense of self. The altered perceptions of body shape, size, temperature etc. are, I think, as fundamental to the experience as the pain itself. Targeting the brain and other contributing systems (immune, endocrine etc.) to create an opportunity for change requires such insight. In many cases of chronic pain, such as CRPS, the awareness and guarding extend beyond the limb and into the ‘space’ around it, so this is certainly important to consider. Remapping of our ‘space’, perhaps?
    Great stuff.

  5. Lorimer, I’m sorry… there always has to be one party pooper, right? You sound way too confident in your blog post about the results.

    I’m not a fan of this study… If it were on the Gong Show, I’d gong it.

    I promise, I won’t ding it due to a small sample size – your group already mentioned that in the discussion. I’m most definitely not a researcher, but I am someone who spends quite a bit of time in the clinic working with people. From a clinical perspective, the standard error of measurement should have been included. In Table 1, the electrical stimulation behavioral response is SO close for crossed or uncrossed that, in my opinion, there isn’t a difference between the responses. Just rounding numbers, the standard deviation is about ±10… so there seems to be some definite overlap, which washes out any difference between the conditions. Not reporting a standard error kills it for me because I’m willing to bet the error is going to be large enough to make the couple-of-point differences meaningless. Laser stimulation has basically a 3-4 point difference between the conditions and a tighter standard deviation… but again, what’s the amount of error in subjects’ perceptions? On a scale of 0-100, if perception error is 5 points… ummm… yeah.

    In working with people, I know there is always some level of a learning curve. So… a huge missing factor is test-retest reliability. If you took those same subjects and ran them through the wringer again 4 hours… or even 2 days later, would the results be the same? I’m not confident they would be. In fact, I’m willing to bet a drink of your choice that there would be even smaller differences between crossed and uncrossed results.

    Do I dare ask why Experiment 2 wasn’t run concurrently with Experiment 1? Sorry, but it made zero logical sense to do Experiment 2 like that… I would have voted to snag data during Experiment 1. Keep it nice and simple, you know?

    Figure 3… garbage. A figure should be readily and easily understood. That figure… I’m holding my tongue. I don’t like it. The whole top aspect of the figure makes no sense to me. What was the value in adding right side of space and left side of space? The body obviously DID note stimulation… and I’m not truly sure there is really a difference between positions.

    Please, rest assured, this isn’t the only time I have been disappointed in research results being pumped out by researchers I respect. It’s happened before… it’ll happen again. I just couldn’t hold my fingers back – I honestly do not find what I read as smashingly fabulous.

  6. lorimer says:

    Hi Selena –
    There clearly is no doubt that you couldn’t hold your fingers back! I appreciate your honesty in saying that you do not find it smashingly fabulous. Does this imply that you are expecting all on BiM to be smashingly fabulous? Excellent! I am disappointed and embarrassed that I have given the impression that I was claiming the study was indeed ‘smashingly fabulous’. To blow one’s trumpet is very uncool and I am a bit mortified to have come across like that.
    I would like to respond to a point you made with which I agree, and a couple of others with which I don’t. Although I would like, at some stage in life, to chat about the vigour of your response, perhaps over a beverage, which, it seems, you will be buying, I will write a bit here on the points you raise. I agree very much with the observation that people have a learning curve – I think this is evident in many places, not just the clinic. However, I would argue that, in an experimental setting, this is a different issue to test-retest reliability, which in my understanding refers to the variability of a result each time it measures the same thing. Determining the reliability of a protocol is an important part of all our experiments – it is almost taken as given that we would determine that, unless we are doing a brand new thing, which we were not. Our protocol was very reliable. The crossed-arm effect is also very robust, and I would like a glass of 1951 Grange Hermitage please.
    Regarding the two experiments, I think it is a very valid question, but our reasoning was this: undertaking a second experiment actually reduces the chance that the result is erroneous. We were also wary of recording brain activity during the behavioural experiment, in case the act of recording somehow changes pain – I am open to that possibility. We would argue it is better science to do two experiments with fewer confounders than one experiment with more confounders.
    I am disappointed you hate Fig. 3 with such enthusiasm! I can see it is a bit hard to decipher – perhaps we fell into the ‘becoming so familiar with the picture that we forget it is tricky’ trap. I think, however, that it tells an important part of the story, and I also think the inclusion of left and right space labels is important when we consider the proposed mechanism. That is, it is possible to cross your arms but not have them on the opposite side of space.
    I reckon a bit of scepticism is a very good thing – but that you don’t think there is a difference between positions is an interpretation of these data that I do not share. I don’t think your interpretation is supported by the statistical results. Although I think there is a difference, it is clearly very small – too small to be clinically significant. From our perspective, that is not the point. However, we tried to make it clear in the manuscript that the magnitude of the effect is not the thing – we think the fact that the effect occurs is the thing, and the congruence between our hypothesis and the cortical evoked responses seals it for us. That we observe analgesia when we conflict these frames of reference is, we think, a potential opener of new directions. I can absolutely see, however, that this aspect that we find exciting is contingent on the existence of the effect. I can see that when you think the effect is not really there, then the rest is immaterial. Thanks a mil for speaking up in your time of discontent, Selena. I hope my responses make sense.

    Selena Horner Reply:

    When 103 people “like” something that is as holey as Swiss cheese, I have a problem. The results of this study are not strong and truly shouldn’t lead anyone to believe there is a difference between crossed and uncrossed because key factors were not discussed in the paper.

    I don’t owe you any beverage yet. You danced around my questions, Lorimer. Right now I am envisioning your pelvic tilt dance. ;)

    When I mentioned test-retest reliability, forgive me for not being clear… I was most definitely not referring to the study protocol. I was referring to the 8 subjects in Experiment 1. Will they perceive the multiple stimuli consistently over time and report their perceptions consistently on the 0-100 scale? If their behavioral response changes over time, then what was shared in Table 1 isn’t the true behavioral response. We all know the nervous system adapts. What would happen if a second trial to determine test-retest reliability of their behavioral response was done? Would the nervous system adapt? If the nervous system adapts, then in reality nothing was learned, because the odd challenge of being aware of sensory stimuli while crossed over the midline was adaptable. The nervous system just needed a little practice to get itself up to speed in this particular new positioning.

    And… standard error of measurement? No comment there? Now, now… that is highly, highly relevant. I just blogged over at Evidence in Motion and mentioned pain and minimal clinically important difference. The standard error of measurement for the 0-10 pain scale is 1.02 for individuals who have low back pain. In the crossed-arm study, subjects were reporting on a 0-100 scale. So… what is the standard error of measurement for rating the intensity of perception? For giggles and kicks, say it is 5. Well, if it is 5, again, there is no difference found in the behavioral response between the two postural situations (see the sketch after this comment for how an error that size plays out).

    One other thing was really, really bothering me about experiment 2. Nothing was shared in the paper about the validity of the evoked potential from an EEG… or its sensitivity or specificity. I honestly didn’t find any relevance with experiment 2. Experiment 2 wasn’t tied to the subjects’ reported perception. I think that bothers me a ton, because when a person reports a perception there is some interpretation of the sensation occurring within the brain – the brain is getting a bit of a workout to make that interpretation. Experiment 2 didn’t catch any of this. Plus… the electrical stimulus energy in Experiment 2 didn’t even match up with what was provided in Experiment 1. It was a higher level of energy. You can hang your hat on it, if you want… but for me to find clinical relevance, a bit more information about the test should have been included in the paper. It’s kind of like… I had a patient who was attending physical therapy with low back pain. The lady didn’t speak English, had a history of breast cancer 2 years prior, didn’t go in for her annual “cancer free” check-up, and seemed to have another red flag (based on what a daughter was telling me)… sensitivity and specificity are hugely important – I contacted the ortho surgeon to ask if he thought radiographs had the sensitivity/specificity to rule out a metastasized tumor. I knew they didn’t… but he didn’t want to get involved and suggested I contact the oncologist. Is an EEG the gold standard for measuring brain awareness of sensory stimuli? And… what are its sensitivity, specificity and test-retest reliability? Is it valid? I’m not trying to be dense; I really don’t know, and the paper didn’t include anything about the evidence or properties for this particular test measure.

    A good figure won’t require explanation to the target audience; a good figure isn’t “tricky;” a good figure makes a lil light bulb go off in one’s head. A good figure can be burned in your brain and can be used over and over again to help with clarifying a concept. That figure was supposed to help explain a concept that is thought to exist… it does a very, very poor job of doing so. Or my main hang up – the research isn’t strong enough or even there to substantiate the figure. I still don’t like it… won’t be whipping it out of my pocket any time soon to share with anyone.

    There may be an effect… there wasn’t enough statistical information truly shared in the paper to determine any effect. I’m not being skeptical – I’m pretty open, but what is being suggested most definitely wasn’t proven in this paper. My interpretation is based fully upon the lack of specific statistical information.
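
    Selena’s standard-error point can be made concrete with two standard formulas: SEM = SD × √(1 − reliability), and the minimal detectable change, MDC95 = 1.96 × √2 × SEM. The sketch below is illustrative only – it uses an SD of about 10 (the figure she reads off Table 1) and an assumed test-retest reliability of 0.75; neither an SEM nor a reliability coefficient is reported in the paper.

    ```python
    # A minimal sketch with hypothetical inputs - the paper reports neither
    # a reliability coefficient nor an SEM, so these numbers are assumptions.
    import math

    def sem(sd: float, reliability: float) -> float:
        """Standard error of measurement: SD * sqrt(1 - reliability)."""
        return sd * math.sqrt(1 - reliability)

    def mdc95(sem_value: float) -> float:
        """Minimal detectable change (95% confidence) for a retest design."""
        return 1.96 * math.sqrt(2) * sem_value

    sd_ratings = 10.0   # assumed SD of the 0-100 intensity ratings
    reliability = 0.75  # assumed test-retest reliability (hypothetical)

    s = sem(sd_ratings, reliability)
    print(f"SEM   ~ {s:.1f} points on the 0-100 scale")  # ~5.0
    print(f"MDC95 ~ {mdc95(s):.1f} points")              # ~13.9

    # On these assumed numbers, a 2-4 point mean difference sits inside
    # measurement noise for any individual rating - Selena's concern -
    # even though a paired group analysis can still detect a systematic shift.
    ```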

    David Reply:

    It is good to see healthy debate about science on this site – smashingly fabulous, actually. My impression of the article was: interesting finding, this is nice pilot work. The authors freely admit there is a small effect, which, as Lorimer points out, is interesting in and of itself. However, the authors did not publish the effect size or the power associated with the study. So I’ll not dance around the issue: what was the effect size? The power? I would say that, more frequently now, clinicians are looking for these markers to make decisions about whether a piece of research is meaningful to their practice. It’s not perfect, but when faced with overwhelming amounts of articles, continuing education gurus’ gospel, science journalism, and the inevitable patient-driven Google search claiming that a cracked egg under their bed can cure diabetes, clinicians are hard pressed to make decisions about what is good science and what is crap; effect size and power are two tools that can greatly help this process.

    I doubt that most clinicians are experts in EEG; to those who are, kudos, and please accept my apologies. A little background for the rest of us to show the validity of studying the N2-P2 and N1 waves of the EEG with respect to sensation would go a long way. Without some background, it’s quite difficult for me to judge whether an EEG squiggle is a valid measure of the physiology of sensation, and whether such small variances between the waveforms, while perhaps significantly different, actually mean anything in terms of effect.

    For me, the biggest issue in both this lively discussion and the article is that neither provides enough detail to allow colleagues to judge for themselves. The more information shared, the greater the ability, if so inclined, to make our own analysis and judge for ourselves whether this is a valid finding. This is healthy scepticism. I’m not sure why we consider or use ‘sceptic’ or ‘scepticism’ like they are bad words. Scepticism is the spirit of science; it means asking questions about doubts and investigating the evidence before accepting a statement as true.

    I applaud the discussion posted here because it speaks to the spirit of science. Let’s get more specific in the information we, as scientists, present and present it cleaner, (figures and graphs always seem straightforward to the maker, but rarely are) so that we can ask better questions and make better experiments. I’ll buy a bottle for the three of us.

  7. lorimer says:

    I agree that clinicians are looking for effect sizes. We would hope, of course, that no-one mistook this study for an efficacy study, in which effect size would rightly be emphasised. In fact, in this study, these stats might be very misleading, which is why we did not put them in the paper. The effect size is very similar across the modalities and intensity levels; Cohen’s d is about 5. Although this corrects for dependence between means, using Morris and DeShon’s (2002) equation 8, it is affected by the a priori decision regarding significant figures of the correlation between means, which we set at 3 (see the sketch below for how strongly that correlation drives the number). Power is also variable, but approaches 1. We were very reluctant to use these stats in the paper and post, for the very reason that they are very open to misinterpretation should the finer details of the study not be well studied, and should the mistake be made that this is a measure of the effect of a new treatment for clinical practice. Re EEG – there are good writings on this; I suggested the keen reader have a look at Charlie Spence’s book, and there are also good writings by the authors – Iannetti and Gallace in particular – that would nut it out. We were straight in the paper about what we think the previous research suggests about the nature of the evoked response, so what we said there, in my view, still stands – I would just be rewriting it here.
    I take under consideration the suggestion that more information in papers and blog posts would allow more readers to make sense of it all and its place in their world. I might also suggest, however, that the paper first went through the normal rigours of scientific peer review, which is a reasoned, respectful and constructive process of feedback, rebuttal or modification, with experts in the field. This process ends ultimately with acceptance or rejection. The paper was published in the premier journal in the field, but, admittedly, it is a journal that does not see itself as having a remit to present all papers to all reader-levels. With regard to the post, I have clearly failed some readers in that I have not provided enough information to bridge the gap from the science to the people. That is why we do BiM, so this is one in which we missed the mark perhaps. BiM is not somewhere that we seek peer review on the quality of the study – I do not think it is well suited to that, and I think we are better assisted by using conventional channels. I very much appreciate the spirit of your contribution David – to help me improve my attempt to increase understanding. I am sorry I came up short in this instance. I hope the new information helps. L
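
    The gap between Lorimer’s ‘about 5’ and the much smaller figures below comes down to that correlation term. Here is a minimal sketch, with made-up means and SDs rather than values from the paper: an independent-groups d divides the mean difference by a pooled SD, whereas a repeated-measures d (in the spirit of Morris & DeShon, 2002) divides it by the SD of the paired differences, which shrinks towards zero as the correlation between conditions approaches 1.

    ```python
    # A minimal sketch with made-up numbers (not values from the paper) of
    # how the correlation between paired means separates an independent-
    # groups Cohen's d from a repeated-measures one (cf. Morris & DeShon, 2002).
    import math

    def d_independent(m1: float, m2: float, sd1: float, sd2: float) -> float:
        """Mean difference over the pooled SD (ignores the pairing)."""
        return (m1 - m2) / math.sqrt((sd1**2 + sd2**2) / 2)

    def d_repeated(m1: float, m2: float, sd1: float, sd2: float, r: float) -> float:
        """Mean difference over the SD of the paired differences."""
        return (m1 - m2) / math.sqrt(sd1**2 + sd2**2 - 2 * r * sd1 * sd2)

    m_uncrossed, m_crossed, sd = 52.0, 50.0, 10.0  # hypothetical ratings
    print(f"independent-groups d = {d_independent(m_uncrossed, m_crossed, sd, sd):.2f}")
    for r in (0.0, 0.9, 0.999):
        print(f"r = {r}: repeated-measures d = "
              f"{d_repeated(m_uncrossed, m_crossed, sd, sd, r):.2f}")
    # r = 0.0   -> d ~ 0.14, close to figures computed from pooled SDs
    # r = 0.999 -> d ~ 4.47, the same 2-point difference, now a d near 5
    ```

    Neither number is wrong; they answer different questions, which is why both sides of this thread can look at the same table and report effect sizes an order of magnitude apart.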

    Selena Horner Reply:

    Lorimer,

    Statistics is most definitely not my forte… but… effect size isn’t tied just to efficacy! Effect size is used to determine if there is a difference between two situations. The issue with effect size is more in the thought pattern of whether statistical significance is clinically relevant. I think it is relevant to determine a comparison between the means to see if there is a difference. Was the behavioral response or the EEG stuff different in comparing arms at side to arms crossed?

    I’m not sure how you figured your Cohen’s d for your blog response, but I’m not getting anything close to 5 with my figures. WAY below 5… I’m calculating a small effect only in experiment 1 with laser stimulation. Actually, I cheat… online stuff is helpful for crunching numbers: http://tinyurl.com/3gd6pj4

    Experiment 1:
    Electrical stimulation
    Energy 1: Cohen’s d = .1199 with an effect-size r = .0599
    Energy 2: Cohen’s d = .199 with an effect-size r = .0995
    Energy 3: Cohen’s d = .1439 with an effect-size r = .0717

    Laser Stimulation
    Energy 1: Cohen’s d = .3125 with an effect-size r = .1544
    Energy 2: Cohen’s d = .4 with an effect-size r = .1961
    Energy 3: Cohen’s d = .341 with an effect-size r = .168

    Experiment 2
    Electrical stimulation
    N1 wave: Cohen’s d = .144 with an effect-size r = .072
    N2-P2: Cohen’s d = .176 with an effect-size r = .088

    Laser
    N1 wave: Cohen’s d = .191 with an effect-size r = .095
    N2-P2: Cohen’s d = .190 with an effect-size r = .095

    Cohen labeled an effect size as small if d = .2 and r = .10
    Based on that interpretation, the only place there is even a small effect size is in Experiment 1, with laser stimulation, for all three energy levels.

    Researchers should hold the reader’s hand. I do think it would have been generously kind to have included information on the EEG… a reader doesn’t have time to go read some book! A reader depends on the researcher. Three sentences on ERP isn’t explanatory enough.

    None of my questions were related to the peer-review quality of the paper. I was asking very pointed, very specific questions to better formulate my interpretation of this work. Just because a paper IS published in a premier journal in any field doesn’t mean squat to me. Just because a lot of people seem to like the findings doesn’t mean squat to me either. I still have to critically think and be responsible and come up with my own conclusions based on the facts and statistics shared within the article. Hopefully any work published is actually utilized in the clinical world (if it is clinically relevant). And… clinicians have a responsibility to basically defend clinical decisions. A clinician has a stronger argument/defense when the types of statistical information I found relevant are included. A clinician can’t say… well, the work was published in Pain or JAMA or PTJ… and a clinician can’t say the work was done by a great researcher… the clinician has to, at times, defend clinical decisions with statistical information, like standard error of measurement, effect sizes, minimal clinically important difference, sensitivity, specificity, validity and reliability.

    Appreciate your time… appreciate your work. With this particular piece of work though, it sure seems the findings have been substantially elevated. In all the cases, except experiment 1 laser stimulation, the effect didn’t reach statistical significance.

  8. I think, as Lorimer points out, the important (interesting, exciting) thing is that the effect occurs at all. He also quite clearly states that ‘this is the beginning’ – just the first step in exploring another idea outside the box of conventional thinking about pain. For me, that is the greatest value of this blog… it encourages us all to comment, participate and think outside that box. It’s hugely refreshing.

  9. lorimer says:

    A very quick one – we need to allow for correlation of means, as per Morris and DeShon. Will read properly later. L

    Selena Horner Reply:

    Ummm… well… okay.

    If I use this site – http://cognitiveflexibility.org/effectsize/ – applying Morris and DeShon’s (2002) equation 8:

    What I posted above is basically the same. Due to the way this calculator rounds:
    Experiment 1, Energy 2 reached a Cohen’s d of .2
    Experiment 2, Laser reached a Cohen’s d of .194

    So, again… nowhere near a Cohen’s d of 5. A 5 is like an extremely large effect. You’d be doing a jig just looking at the data because you’d know darn well there was a huge difference between the two sets of data.

  10. Betsan, I agree. It is a starting point from which we can develop greater understanding. I applaud the authors’ endeavours and certainly believe that thinking out of the box is one way in which we can achieve this. All discoveries start somewhere.