Archives for category: Uncategorized

Comments made in week 8/9

https://psuc6b.wordpress.com/2011/11/19/the-importance-of-case-studies/#comment-14

http://thgoatse.wordpress.com/2011/10/28/misleading-advertisements-their-lies-truths-stats-and-impact/#comment-30

http://saspb.wordpress.com/2011/11/22/quantivite-versus-qualitative/#comment-18

http://cjcpb.wordpress.com/2011/10/28/significant-or-useless/#comment-29


A fifth comment was made here (http://cjcpb.wordpress.com/2011/10/28/significant-or-useless/#comment-30). Please do not mark this comment; it was made only to provide links to the studies, because the hyperlinks in my previous comment did not work.
Thanks.

I’m sure we’re all aware that research can be costly; PET scans, for example, can cost as much as £1,000 an hour. But are you aware that who funds the research may affect the results? In this week’s blog I will be examining funding bias and asking the following questions: What is it? How and why does it occur? And finally, what can we do about it?

Funding bias is the tendency for the results of a study to support the interests of those funding it. A number of meta-analyses have shown that if the company funding the research manufactures the drug on trial, the results are more likely to show support for the drug (Becker-Brüser, 2010; Baker et al., 2003). This is worrying, as it may result in ineffective or possibly dangerous drugs reaching the public. It’s important to realise that this isn’t limited to pharmaceutical research: evidence shows funding bias exists across a number of industries, such as tobacco (Turner & Spilich, 2006) and mobile phones (Huss et al., 2007), to name a few.

Now that we’ve identified what it is, I want to ask why it occurs. Of all the questions this blog attempts to answer, the question of “why” is probably the most difficult. In this article, Florence Colantuono suggests the reason for funding bias lies in human nature, stating that researchers may feel a sense of loyalty and a desire to please the company. So before we start accusing these researchers of cold-heartedly meddling with their findings for financial gain, let’s try to see things from their perspective. Pharmaceutical companies face tremendous pressure: health services and the public rely on them to provide treatments and cures. New drugs have to go through a rigorous series of tests before they’re released (read more here), which can take around 10–15 years of ongoing research. Over this time, a great deal of money is riding on a particular outcome; if the drug turns out to be ineffective, all that time and money is wasted. This doesn’t excuse them, but I’m sure you can see how a researcher might succumb under these circumstances. Researchers will sometimes be employees of the company, in which case financial incentives may be awarded for completing research early or for dedication to the company. Prochaska, Hall and Bero (2008) found that tobacco companies based research grants on the likelihood of the research producing favourable results.

You may think these manipulations would be obvious to those reading the paper; however, in this article Richard Smith, a retired editor of the British Medical Journal, claims this is not the case. He states that researchers do not “fiddle directly with results”, as this would be “far too crude, and possibly detectable”. This leads us to the penultimate question: how does funding bias occur? Smith and his colleagues have compiled a list of “tricks of the trade” used to achieve desired results. These include testing a drug against a less effective drug, and testing it against substantially lower or higher doses of a comparator so that it appears comparatively more effective or less toxic. You will know that a variety of different variables can serve as measures of the same construct: stress, for example, could be determined by heart rate, levels of cortisol, etc. Researchers may choose measures that are likely to reflect more favourably in the results. In the article, the example given refers to research on perchlorate exposure (an ingredient in rocket fuel). Scientists working for perchlorate manufacturers used a single thyroid hormone as their measure of harm, and found that safe levels of exposure were close to three times higher than those found in a National Academy of Sciences (NAS) experiment that used a different measure. Florence Colantuono, in the previously mentioned article, suggests that companies will withhold or delay the publication of undesirable research; this is supported by Prochaska, Hall and Bero (2008), who found that tobacco companies would withhold the publication of unfavourable results.

So far we’ve looked at what funding bias is, why and how it occurs, but what can be done to resolve this problem? Florence Colantuono suggests a number of changes can be made to research policy to reduce funding bias:


1) Keep contact between the researcher and the company to a minimum
2) Grants should not be based on the outcome or speed of research
3) Funders should not withhold research or prevent the publication of results

 

These suggestions seem to make sense, but I feel they are overly simplistic. However, I’m not going to attempt to solve the problem of funding bias in a blog. Instead I’d like to focus on what we can personally do; after all, my goal is to educate the reader (or at least get you started). You’ve taken a good first step, in that you’re now aware of funding bias. The next step is to incorporate this knowledge into your evaluation of papers. Most papers include funding information in the first few pages: Google the company that funded the research and ask, “what are their motives in this research?” Ask whether the results may have been influenced by the funding, because the research we’ve reviewed suggests there’s a good chance of that being so.

References:

PET Scan Information – http://www.scandirectory.com/content/pet-scan.asp

Becker-Brüser, 2010 – http://www.ncbi.nlm.nih.gov/pubmed/20608245

Baker et al., 2003 – http://www.ncbi.nlm.nih.gov/pubmed/14645020

Huss et al., 2007 – http://www.ncbi.nlm.nih.gov/pubmed/17366811

Turner & Spilich, 2006 – http://www.ncbi.nlm.nih.gov/pubmed/9519485

Prochaska, Hall & Bero, 2008 – http://schizophreniabulletin.oxfordjournals.org/content/34/3/555.full

Florence Colantuono – http://www.experiment-resources.com/research-grant-funding.html

Information on Cancer drug trials – http://cancerhelp.cancerresearchuk.org/about-cancer/cancer-questions/how-long-does-it-take-for-a-new-drug-to-go-through-clinical-trials

Richard Smith – http://www.washingtonpost.com/wp-dyn/content/article/2008/07/14/AR2008071402145.html

http://ohhaiblog.wordpress.com/2011/10/14/are-ethical-considerations-when-practising-deception-overshadowing-the-progress-of-psychological-research/#comment-38

http://prperc.wordpress.com/2011/10/28/the-decline-of-the-qualitative-method/#comment-14

http://psucd6psychology.wordpress.com/2011/10/28/a-behavioural-study-of-obedience/#comment-16

https://re3ecca.wordpress.com/2011/10/28/pop-psychology-harmful-or-helpful/#comment-29

From the blogs I read last week, the majority seemed to view ethics as too strict and claimed that ethical guidelines limit research practices unnecessarily. However, I didn’t see any mention of the core reasons why these ethics are important, beyond the moral justification that it’s “the right thing to do”. So, in an attempt to level the playing field a little, here I am arguing that ethics rarely restrict research, and that their strictness is beneficial for psychology.

Many bloggers made the point that in adhering to ethical principles we sacrifice the validity of results in an attempt to avoid harm to participants. It was argued that this is a poor trade-off, as the potential harm is generally minimal. An example given was the dilemma of informed consent in observations: obtaining it compromises the validity of the results, so why not simply obtain consent after observing? Well, ethical codes are very much catch-all systems and rather general in their terms. This prevents unethical studies from slipping through the net, which could happen with vaguer, less rigid guidelines. So rather than explicitly listing the cases when consent is needed, the code simply says that you must obtain consent. But doesn’t this overly restrict research? Not at all: in reality, studies can be granted leniency on certain guidelines if it is deemed necessary. In fact, deception is used in up to 50–75% of published reports (Adair, Dushenko & Lindsay, 1985). However, it can only be used if the study requires it, not just because it would make things easier for the researcher.

People often claim that Milgram’s study would never make it past ethical guidelines these days, yet Milgram’s study has been replicated many times since (links included below). So, as you can see, rigid ethics are important for preventing unethical studies from slipping through the net, but the guidelines can be readdressed to an extent in individual cases where required, and thus rarely restrict research.

However, studies have shown that most people do not mind being deceived (http://psp.sagepub.com/content/14/4/664.abstract). If people don’t mind being deceived, then what is the harm? Similarly, Milgram reported that most (82%) of his participants were glad to have taken part. A few studies have shown that, in general, most people are largely indifferent to deception. However, dissatisfaction among even a few participants in each study can have a big impact collectively. In Milgram’s experiment, five of the forty participants (12.5%) were not glad to have participated. Considering the number of studies that run every year, this degree of dissatisfaction would be devastating to psychology as a whole if it occurred regularly. These are extreme examples, certainly, but they show how disregard for ethics can produce dissatisfied participants.

As I’m sure you’re all aware, psychology has a history of controversial studies: Milgram, Harlow’s monkeys and Zimbardo, to name a few. These are extreme and rare cases, but I’m sure you’re aware of the impact that even a few extreme examples can have on the public view of psychology. Just think about which studies are best known to the public when you mention psychological research. Though I wasn’t able to find a more scientific example to illustrate my point, consider the top five results when I Google “famous psychology experiments”: Zimbardo and Milgram appear on all of them (one result is the Wikipedia page on Milgram), and Little Albert and Harlow’s monkeys appear among others. To move beyond this we need to ensure a sound moral code. This isn’t just important for establishing psychology as a respectable and acknowledged science, but also for ensuring future research: if participants don’t feel their safety and rights are important to researchers, they are unlikely to put themselves forward as participants.
Many of last week’s posters seemed to show a higher regard for the research than for the participants. A perfect example came from the following blog (http://saspb.wordpress.com/): “it’s very easy to lose sight of the fact that the research findings are what’s most important here and if it means deceiving someone a little or not getting their signature on a piece of paper maybe it’s worth it”. To saspb: just to make it clear, I’m not picking on you personally; similar remarks were made by a number of bloggers, yours just best typifies this attitude. The research is incredibly important, but without participants there is no research.

To conclude, I hope I’ve shown the need for ethics, and that they aren’t as restrictive as they sometimes seem. Below I have included links to some Milgram replications.

Links to replications of Milgram:

http://www.plosone.org/article/fetchArticle.action?articleURI=info%3Adoi%2F10.1371%2Fjournal.pone.0000039

http://www.roddickinson.net/pages/milgram/project-synopsis.php

http://thesituationist.wordpress.com/2007/12/22/the-milgram-experiment-today/

http://psycnet.apa.org/psycinfo/1972-24881-001

Links to my comments:

Week 3: Who Should Support Research Projects Financially?
Reliability versus validity and the balance between experimental control and generalizability
“Why do we use the scientific method – and are there other ways to go about the process of
http://thgoatse.wordpress.com/2011/10/14/drug-statistics-to-accept-or-not-to-accept-that-is-the-question/#comments

Evaluate the usefulness of qualitative research methods. Qualitative methods such as case studies and interviews certainly have their place in science, especially in the social sciences. However, qualitative research possesses a number of key flaws, and as a result it is used less than quantitative methods: Shuval et al. (2011) claim that qualitative research made up a mere 4.1% of research published in medical journals in 2007.

One important use of qualitative research is in the formulation of hypotheses. Case studies, such as those of rare cases, can have a large impact in that they galvanise the scientific community. A classic example is the case study of H.M. For those of you not familiar with it, I’ll summarise: H.M. was a patient who underwent brain surgery in which his hippocampi were removed in an attempt to cure his severe epilepsy. This had a devastating effect on his ability to form new memories. As it would be unethical to perform such lesions purposefully for research purposes, cases like this are invaluable to our understanding.

However, qualitative research such as this lacks generalisability; as a result, it may be used in developing hypotheses but has limited value in itself (Dogan & Pelassy, 1990). For example, decisions about which treatment the NHS should use for schizophrenia could not be based on case studies. Case studies of schizophrenia could be used to enrich our understanding of the disorder and, as a result, aid the development of treatments; but before implementing such treatments, quantitative research is required to assess their usefulness in the wider population.

Another key problem with qualitative methods is the lack of control over variables, which is crucial for ensuring the validity of results. Some would even go as far as to say that, due to this lack of control, qualitative methods are scientifically valueless (Campbell & Stanley, 1966). This isn’t entirely the case: as mentioned above, such methods can help with the formation of hypotheses and serve as a basis to build around. Nevertheless, the lack of validity and generalisability makes it difficult to develop knowledge. Given that the aim of science is to explain the phenomena in the world around us, qualitative research will always have limited use in science.

References:

Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally

Shuval et al. (2011)

Dogan, M., & Pelassy, D. (1990). How to compare nations: Strategies in comparative politics (2nd ed.)

As we are able to choose any topic relevant to research methods, I thought I’d go for something different.

Another week, another blog. The topic up for debate this week is “Is it possible to prove a research hypothesis?” Proof is simply defined as sufficient evidence to show something to be true; a hypothesis is a supposed explanation for a phenomenon which can be tested. So, now that the key terms are defined, let’s get down to the matter at hand. Is it possible to prove a research hypothesis? The answer: no, we cannot prove any of our hypotheses in an observational science, and here is why.

In observational sciences we gather data through inductive reasoning: making observations and using them to make generalisations. For example: every old woman I have ever met has been called Doris or Ethel (observation), therefore I can assume there’s a good chance that the next old woman I meet will be called Doris or Ethel (hypothesis). We could then perform countless studies in which we asked old women their names, and every time our results could support the hypothesis. However, just because this holds for the people we have studied doesn’t mean we can assume it’s true for all other old women. “No amount of experimentation can ever prove me right; a single experiment can prove me wrong” – Albert Einstein. David Hume takes a similar stance, arguing that we are never justified in reasoning from repeated instances.

A common counter-argument is that each observation supporting our hypothesis brings us a little closer to proving it, as we’ve eliminated one more possible uncertainty from the list. Unfortunately not, and the following example, despite being overused, demonstrates why beautifully. With each morning that you successfully start your car, does this increase the likelihood that it will continue to start at every attempt for the rest of time? Of course not. As this example shows, we cannot generalise with certainty from previous observations.

Although we’re unable to prove a theory, we are able to disprove one, as Einstein’s quote shows: we could disprove the claim that all ravens are black with the observation of a single white raven. So, theoretically, if you were to disprove all antitheses (hypotheses which contradict our hypothesis) you could prove a hypothesis; however, in reality, identifying all antitheses is impossible. To conclude, I feel it’s important to point out that despite being unable to prove theories, we can be very confident in a theory, which in science seems to be the next nearest thing. A great example is gravity: despite being unproven, it is a theory that almost all physicists accept as true, to the extent that theories developed since often rely on it being true. So, to summarise: proof is impossible in observational sciences because they are based on inductive logic.
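The asymmetry between confirmation and falsification can be sketched in a few lines of code. This is only an illustration of the raven scenario above; the number of observations is arbitrary:

```python
# Sketch of the raven example: any number of confirming observations fails to
# prove the universal claim "all ravens are black", while a single
# counterexample disproves it for good.
observations = ["black"] * 1_000_000      # a million confirming instances

def claim_holds(obs):
    """True only while every observed raven is black."""
    return all(colour == "black" for colour in obs)

print(claim_holds(observations))   # True so far, yet still not proof

observations.append("white")       # one white raven
print(claim_holds(observations))   # False: the claim is definitively disproved
```

No matter how large the confirming list grows, the first `print` never becomes proof; one contrary observation flips the second to `False` permanently.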

The question up for debate this week is whether having an understanding of statistics is beneficial. The answer: yes, and this applies not only to those heading into careers that require statistics, but to the layman too. Understanding statistics enables us to properly evaluate the myriad of numbers that bombard us daily, and thus guard ourselves against being fooled or manipulated by them. Let’s look at a statistic I saw recently: “100% of people saw improvement in just 1 days! After 4 weeks, they had clearer skin and after 8 weeks it stayed clearer”. To most people this product would seem like a great choice: if 100% of those tested showed an improvement and continued to do so, it’s surely going to work for you and me, right? Not quite. A quick glance at the small print gives us a better idea of what is really being said. The statistic is actually composed of two separate tests, in which two separate products from the kit were tested on two separate groups. The two statistics are slyly linked by the vague use of the word “they” to imply a single group. The implication is that everyone who used the kit experienced improvement, and continued to do so for 8 weeks. The reality is that a small group of 30 people found that the isolated use of one of the kit’s three products led to an improvement, while a separate group of 31 people who trialled a different product from the kit found an improvement over 8 weeks. This statistic is therefore pretty useless at telling us how effective the kit is as a whole, with all components used in combination, the way customers are directed to use it. As this example shows, even a minimal application of statistical understanding reveals that the reality of this particular claim is far from what it suggests.
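The structure of the advert’s claim can be laid out explicitly. In this sketch only the group sizes (30 and 31) and the two-separate-trials design come from the small print described above; the product labels and the assumption that everyone improved are illustrative:

```python
# Hypothetical reconstruction of the advert's two trials (labels and
# improvement counts assumed; only the group sizes come from the small print).
trial_a = {"product": "product 1, used alone", "n": 30, "improved": 30}
trial_b = {"product": "product 2, used alone", "n": 31, "improved": 31}

for trial in (trial_a, trial_b):
    pct = 100 * trial["improved"] / trial["n"]
    print(f'{trial["product"]}: {pct:.0f}% improved (n={trial["n"]})')

# Neither group tested the complete three-product kit used in combination,
# so the headline "100%" claim says nothing about the kit as a whole.
trials_of_complete_kit = 0
print("Trials of the complete kit:", trials_of_complete_kit)
```

Both individual percentages can be perfectly true while the quantity the customer actually cares about, evidence for the whole kit, remains zero.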
In conclusion, a basic understanding of statistics enables us to dissect the figures given to us and determine their true merit, which in turn lets us make informed decisions in our everyday lives.

The statistic and small print mentioned above can be found at the following link:
http://www.clean-and-clear.co.uk/advantagekit