Posted by: lindseynewkirk | April 16, 2014

Manipulating The Masses

The power that the media has to influence populations through message framing and presentation patterns, as highlighted in Images of China (Li, 2012), reminds me of George W. Bush’s and the media’s messages to America when launching the “War on Terror”. I glanced through an article I found, “Framing the Truth: U.S. Media Coverage during the War on Terror” (Wiggins, n.d.), in which the author points out how the news media embodied the “us-versus-them” frame. Given that the framing of news stories has a massive impact on how people perceive a given issue and how they interpret it, I think the media was incredibly irresponsible in spreading that message. The fear it evoked severely altered people’s attitudes toward Muslims, causing a ripple effect of unnecessary judgment and hate toward an otherwise peaceful population that had nothing to do with the small fraction of Muslim terrorists blamed for the attacks.

In another article, “Does Watching the News Change our Attitudes about Political Policy: A Terrorism Case Study” (Brinson and Stohl, 2012), I found this closing remark to be quite appropriate in considering the media’s responsibility in message framing around politics: “The media have the responsibility to not only provide ‘the news,’ but also to ensure that they provide the public with the context and background to enable the public to evaluate the information contained within it.”

Posted by: Melissa De Lyser | April 16, 2014

Media framing/Images of China: Is the bias only in media?

After reading Images of China, I would not disagree that the Australian media is biased against China.  

But I do take exception to some of Li’s arguments.  Li writes that feature stories get softer “postcard” play than economic and/or politically focused stories.  I don’t think that’s necessarily representative of bias. “Hard” news stories are played differently than “soft” news stories in all media.  That’s more a matter of readership volume than media bias.

I also question Li’s bias.  Li writes that the Australian media framing tactics included the “reactivation of the public memory of the Cultural Revolution and the Tiananmen Square protests of 1989.”  Li cites media framing theory, where “issues are consciously or unconsciously influenced by historical-cultural-economic-political factors.”  Is Li free from bias on this topic?  Can he objectively argue that Australian media references to China’s human rights violations are without news value? 

Li also writes that the Australian media calls China a communist government with a capitalist market.  At one point, Li himself describes China as a “nominally communist state.”  Is the media’s incorporation of the communist government/capitalist economy based on bias or fact? 

 In our Orientation discussion of Jonathan Gottschall’s The Storytelling Animal, we debated whether facts exist. We all add personal bias to our stories – whether as journalists or general conversationalists. We are storytellers, and stories contain bias. In the end, I think it comes down to a measure of degree.

Posted by: lindseynewkirk | April 10, 2014

More Than ‘Do No Evil’

It is disheartening to realize that countless publications have been developed over the past few decades to address unethical research atrocities. Questions of innate human nature aside, the guiding principles used in qualitative research are more than a set of rules for ‘doing no evil’.

Even the most well-designed project has the possibility of an ethical issue arising in data collection, many of which will not be black or white. Though they don’t provide hard answers, guiding principles do ground researchers in ethical inquiry so that they can make the best decisions within specific contexts if ethical considerations do arise.

In our presentation tonight, Scott and I will take you on a journey of research ethics, including several situations in which you get to decide – What Would You Do? Here is one to contemplate:

In a study exploring people’s experiences of recovery from a heart attack, an interviewee expressed extreme feelings of worthlessness resulting from his health condition, which meant he was unable to work or to undertake activities he viewed as part of his male identity. Feeling he might be depressed and at risk, the interviewer suggested that he talk to his doctor about his feelings, but he said he didn’t want to do that, as all the doctor would do would be to give him more medication. He also commented that he didn’t want his wife to know or she would worry. The researcher promised confidentiality but was concerned about his mental health. Should she tell someone about him and if so, who? (Riles, 2012).

Posted by: swhee1er | April 10, 2014

Ethical questions loom large for Big Data

The unique ethical challenges posed by the internet become especially knotty when applied to the analysis of large data sets, or Big Data. Take the issue of informed consent. Instead of a “mailing list with 100 or 1000 subscribers” (Eysenbach & Till 2001, p. 2), Big Data researchers deal with subject populations many times that number. Since asking researchers to obtain consent from every user who comprises these data sets appears to be unrealistic, does that make any research conducted using Big Data ethically questionable?

Similar questions arise if one turns to the issue of harm. Given the size and diversity of these data pools, accurately diagnosing the risks of researching them remains a difficult task at best. Users’ comfort levels with their online activities being analyzed are bound to differ, and determining which users find it acceptable and which do not may prove no more viable than obtaining consent from every user involved. Even if researchers manage to resolve (or sidestep) this hurdle and anonymize the data, it has been demonstrated that it is not only possible but also relatively easy to “de-anonymize” that same data.[1] Considering the potential for injury, this last point seems especially damning, not just of studies using Big Data specifically, but also of internet studies in general.

One potential solution would be to scrub the data of all possible identifiers, which would presumably protect all subjects but also limit the data’s utility. Is this an acceptable solution, or does a better one exist?

[1] For a fascinating (and troubling) look at how easily data can be “de-anonymized”, see http://arstechnica.com/tech-policy/2009/09/your-secrets-live-online-in-databases-of-ruin/.

Posted by: B. Scott Anderson | April 10, 2014

Ethics, research and dilemmas

Ethics — particularly research ethics — is an interesting topic for a variety of reasons, one of which is that there is a difference between research ethics and ethical research.

When it comes to these topics, things get really muddy when the internet becomes involved, because there’s a sense of privacy and anonymity in particular situations. Is it possible to safeguard yourself against unwanted online intrusions by a researcher who is looking to collect data? How can it be done? Where do children and social networking come into play here?

These are a few questions we’ll look at Thursday in class. We’ll also be taking a look at hypothetical situations when research and ethics intersect.

Here’s one of the dilemmas we’ll present Thursday, and we hope it will get you thinking about what you would do if you were in different sorts of ethical research situations.

Dilemma:

Dr. T has just discovered a mathematical error in a paper that has been accepted for publication in a journal. The error does not affect the overall results of his research, but it is potentially misleading. The journal has just gone to press, so it is too late to catch the error before it appears in print. What should Dr. T do?

Posted by: Mike Plett | April 10, 2014

Navigating an ethical minefield

Researchers must tread cautiously when conducting research on online communities, especially those serving vulnerable populations — such as children — or covering sensitive topics — such as sexual abuse. To help navigate these potential ethical minefields, Eysenbach and Till (2001) suggest that researchers and institutional review boards consider the following issues before studying an Internet community: intrusiveness, perceived privacy, vulnerability, potential harm, informed consent, confidentiality and intellectual property rights.

Of the preceding issues, I must admit it didn’t really occur to me that some participants in online groups may actually be seeking publicity, thus they may consider their posts to be their own intellectual property. If this is the case, it makes sense to get a participant’s explicit consent before quoting posts verbatim. But I wonder how often this is an issue.

It seems to me more likely that researchers will run into trouble by unintentionally violating people’s privacy. I can easily imagine a researcher, with every intention of protecting someone’s identity, inadvertently quoting enough of a post that the author could be identified via a Google search.

One way researchers can avoid ethical pitfalls is by consulting the Association of Internet Researchers (AoIR) ethical guidelines, which stress the idea that there is no “one size fits all” approach to the unique ethical challenges presented by internet research. It’s important for us as researchers to realize that — depending on the context — the rights of subjects may outweigh the benefits of research.

Posted by: Natalie Henry Bennon | April 10, 2014

The debate over public v. private is older than the Internet

The debate over what’s public and what’s private is probably as old as mankind, or at least language. It is certainly older than the Internet. It has been hotly debated at least since the beginning of newspapers as citizens and journalists struggled to discern what information is public, which people are considered public figures, and when is the public good more important than privacy. Many kinds of research are conducted in the interest of the public good. Other kinds are not – Coca-Cola’s research into Coke One, for example. I could argue from either side on pharmaceutical research. Public good or corporate greed? Hmm.

In this week’s readings, the idea of public versus private information looms large, and the Internet has blurred the lines. A written journal or handwritten letters to a friend can easily be considered private – unless written by or sent to a public figure. An online journal, or posts to Facebook or via a small discussion board – I can see how these create ethical dilemmas for researchers. Moreover, sometimes people may be writing in a very public forum but be addressing someone specific, and forget that their words can be seen by hundreds of others, or a researcher.

What are the protocols or standard practices for developing trust in online communities? What are the standard practices for making participants unidentifiable?

Posted by: kgaboury | April 9, 2014

Ethics in online research – Kevin Gaboury

When an ethical issue comes up during online qualitative research, researchers should look to one of the AOIR’s key guiding principles for ethical decision making in Internet research:

 “The greater the vulnerability of the community/author/participant, the greater the obligation of the researcher to protect the community/author/participant.”

There’s a general misconception that whenever something is posted online, it automatically becomes part of the public sphere.  However, Internet communities and message boards are something of a gray area because members often assume that their thoughts and opinions will remain private. On social networking sites like Facebook or Twitter, by contrast, buried somewhere in the fine print of the user agreement is the condition that all posts or “Tweets” become public unless the user chooses to make them private.

I believe researchers should carefully consider the subject matter of the online community before beginning data collection. In cases where extremely personal issues, like sexual abuse, are discussed, researchers should always inform users of their intentions and receive permission before using their remarks in research. Those who join online health-related communities, like cancer support forums, do so out of a desire to connect with and lend support to those in similar situations, not be guinea pigs for research. Most would not appreciate their remarks being published in a study without their permission.

Should people in online communities expect privacy? Or are their remarks fair game for researchers?

When conducting qualitative research on internet communities, researchers should take into consideration participants’ perception of privacy. While posting on the internet may mimic publishing or speaking in a public forum, Eysenbach and Till state that “there are important psychological differences, and people participating in an online discussion group cannot always be assumed to be ‘seeking public visibility.’” Web-based platforms can provide researchers with a wealth of information, but researchers must realize that the privacy of the participants, or at least what participants perceive as private, comes first.

Corporate business groups should also consider the “perception of privacy” that employees may have when posting comments on intranet sites. While it may be understood that views expressed at work are ultimately owned by the company, corporate researchers should exercise caution when they want to use information found there. As a gesture of respect for employees, corporate researchers should reach out to obtain permission to use content and communicate to employees that internal forums could be used for research purposes. While the Eysenbach and Till article focuses mostly on ethical issues surrounding patient support groups, the same ethical considerations can easily be translated to the corporate world.

Should the same ethical issues surrounding patient support groups be considered with internal employee discussion boards? Or should employees assume their views and opinions are not private?

Posted by: Melissa De Lyser | April 9, 2014

De Lyser – Ethics – For April 7

When it comes to conducting research within online communities, some might substitute “exploitation” for the word “research.” Researchers certainly have an ethical, if not legal, obligation to adhere to any internet community guidelines prohibiting researcher contact. Violating these guidelines is clearly unethical.

As Eysenbach and Till point out, communities without guidelines create a more ambiguous environment. If a community doesn’t specifically preclude “outside” participation, a researcher could reasonably argue that contacting members isn’t an ethical violation. Is it reasonable to assume that internet communities without specific exclusions are open to everyone? I understand, and sympathize with, the privacy argument. However, if the host/sponsoring organization has not made efforts to protect participants’ privacy, is it the researcher’s responsibility to do so?

Even if researcher activity is accepted, other ethical considerations come into play. The internet has the capability to dehumanize. Markham et al. (2012) raise the question, “Is this a text or a person?” The authors also mention protection of vulnerable populations. Though this issue is not unique to online communities, the “distance” between researcher and subject in the online environment makes it difficult for researchers to gauge subjects’ mental/emotional stability. Authenticity is also a factor. Is the subject who he/she says he/she is? Does he/she really have the characteristics/experiences the researcher requires?

The IRB regulates research on human subjects. Should a similar body be created to oversee privacy in online communities? Should participants in online communities expect that level of privacy?
