Chapter 9.

Testing the Underprivileged

Background Reading: CDC, U.S. Public Health Service Syphilis Study at Tuskegee; Varmus and Satcher, Ethical Complexities of Conducting Research in Developing Countries; and Kass, Just Research in an Unjust World

The Tuskegee Syphilis Study

"Tuskegee," pronounced "tuss-KEE-gee," is a Cherokee Indian name for a small region in central Alabama and the indigenous people who once lived there. It is now the name of a small U.S. town.

The Tuskegee Syphilis Study began in 1932, when hundreds of African American men with syphilis in the small town of Tuskegee, Alabama, were told by scientists working for the U.S. Government that they were receiving treatment for their condition. Instead of being treated, these patients were studied with the aim of learning about the effects of the untreated disease.

Syphilis is a bacterial, sexually transmitted disease, and its effects, if left untreated, are miserable. The initial sores and ulcers in the genital area can spread over the body, causing a painful rash. Left untreated for on the order of 15 years, the infection can reach the nervous system and the cardiovascular system, produce large tumors on the organs, bones, and skin, and is eventually fatal.

Patients in Tuskegee were studied, their disease untreated, for as long as 40 years, from 1932 to 1972. Their suffering was immense, and during that time they infected dozens of others and gave rise to numerous cases of congenital syphilis.

That's not all. In 1943, penicillin was introduced as a cheap, effective means to treat and cure syphilis, and it soon became the standard, widespread treatment. But patients in the Tuskegee study were denied access to penicillin so that the study could continue, even while being told that they were being treated.

In 1965, a social worker and epidemiologist named Peter Buxtun was hired by the U.S. Public Health Service, where he learned about the Tuskegee study. He lodged several official protests over the next few years, all of which were rejected by the U.S. Government. The U.S. Center for Disease Control argued that it needed to continue the study until all the original patients had died.

By then, the aim of the study had become to learn about the physical effects of syphilis through post-mortem autopsies on Tuskegee patients. The underprivileged patients at Tuskegee generally agreed to this before they died, in return for a modest sum of money to help their families pay for their burial.

In 1972, Buxtun became a whistle-blower when he decided to leak details of the study to the press, and the story broke in the Washington Star. It began with the line, "For 40 years, the U.S. Public Health Service has conducted a study in which human guinea pigs, not given proper treatment, have died of syphilis and its side effects." This led to a media frenzy and public outcry.

In the wake of the scandal, the study was closed. Congressional hearings on the matter led to the creation of the Belmont Report, discussed in the first lecture of this course. Tuskegee patients and their families received $10 million in settlements.

Tuskegee was not the only operation of its kind. For example, it was recently revealed that a very similar experiment was carried out in Guatemala in the 1940s by U.S. Public Health Service researchers with funding from the National Institutes of Health. There, patients were deliberately infected with syphilis and gonorrhea without their knowledge, with the aim of studying the effects of different treatments on them.

During the decades following World War II, many Western countries also carried out experiments on soldiers to study the effects of nerve agents and other chemical agents. This was sometimes done on a volunteer basis, although the volunteers almost never knew what they were getting into. Here is some footage of British soldiers being studied to learn the extent to which LSD could incapacitate troops.

(Source)

Many similar studies were carried out in those years in the U.S. and in Europe, including ones that exposed hundreds of soldiers and sailors to nerve agents such as the deadly sarin gas without their knowledge.

Tuskegee's Legacy

Tuskegee is undoubtedly a dark episode in the history of scientific research.

But it is also in part responsible for our modern bioethics infrastructure. It was deemed so important an event that governments around the world formally established research ethics as a field, with codes and review procedures intended to ensure that this kind of thing would not happen again.

But part of the legacy is also a very difficult question: what sort of testing on underprivileged people, if any, is appropriate?

The HIV Testing Controversy

In 1994, a study commonly known as ACTG 076 (NEJM / PubMed) produced the first very strong evidence that HIV-positive pregnant women can be treated to help prevent HIV from being passed on to their unborn infants. The anti-retroviral drug Zidovudine (also known as azidothymidine, AZT, or ZDV) was found in a large-scale controlled study to reduce the mother-to-child transmission rate of HIV by about two thirds.
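
To make that figure concrete: if, hypothetically, about one in four infants born to untreated HIV-positive mothers contracted the virus (a round illustrative baseline, not the study's exact figure), a two-thirds reduction would bring the rate down to roughly one in twelve:

\[
0.25 \times \left(1 - \tfrac{2}{3}\right) \approx 0.08
\]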

You will recall that retroviruses like HIV produce DNA copies of their genetic material that are inserted into the host cell's genome, reprogramming the cell to produce more virus. Anti-retroviral drugs like Zidovudine disrupt one or more of the steps in this process; Zidovudine in particular inhibits reverse transcriptase, the enzyme that copies the viral RNA into DNA. (See a video about how this works.)

Zidovudine has since been shown to significantly reduce the replication of HIV in adult patients as well.

Studies like this one are done by forming a test group and a control group. The "test group" receives the treatment being studied. The "control group" receives a placebo or no treatment at all, so that the effect of the treatment can be measured by comparison.

In this case, both groups consisted of women from the United States and from France. And it quickly became clear that infants in the test group had a much better chance of being born free of HIV.

When the results came in, the study was immediately halted. Why? Because once a treatment is known to offer a significant chance of benefit, you cannot ethically withhold it from the people in the control group. This would seem to be one of the important lessons of Tuskegee.

However, the Zidovudine treatment is still not ideal in many respects. It has significant side effects. And a two-thirds reduction in transmission is not as strong a result as we would eventually like to see.

Thus, although the trials were halted in Western countries, many studies funded by the United States (as well as some funded by the United Nations) continue to be carried out in developing countries. In these studies, Zidovudine is usually withheld from at least one group of patients in order to compare the effects of other treatments. The host countries include Côte d'Ivoire, Uganda, Tanzania, South Africa, Malawi, Thailand, Ethiopia, Burkina Faso, Zimbabwe, Kenya, and the Dominican Republic.

Analogies and disanalogies

The analogies between these studies and the Tuskegee study are obvious. But there are also disanalogies.

In the HIV testing controversy, informed consent is always obtained. The studies are welcomed by the host governments, and the proposed procedures and their known effects are carefully explained to all patients before they decide whether or not to participate. No such informed consent was obtained in any of the scandalous studies of the mid-20th century.

There are also reasons why Zidovudine is not yet practical in developing countries. To be effective, it must be administered under careful, modern hospital conditions. Yet the countries with the worst HIV burdens tend to have large populations in which women give birth outside such controlled settings.

The Zidovudine regimen is also too expensive to be sustainable for most patients in developing countries. It costs around £500 per infant to administer successfully, while in many of these countries the average amount spent on health care is less than £20 per person per year. So many of the benefits demonstrated in the Zidovudine trials will simply never be experienced by the societies participating in them.
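
To put those figures side by side: at roughly £500 per infant against less than £20 of health spending per person per year, a single course of treatment costs on the order of 25 years' worth of per-capita health spending:

\[
\frac{\pounds 500 \ \text{per infant}}{\pounds 20 \ \text{per person per year}} = 25
\]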

Zidovudine and its side effects are also riskier in populations that are malnourished or concurrently carrying other diseases. This poses a particular challenge for its administration in some developing countries as well.

Nevertheless, to evaluate any new regimen rigorously, there generally must be both a control group and a test group. There are ways to run trials without such controls, but they are unlikely to be as definitive.

This is what makes the practice so controversial. A treatment exists, albeit not an entirely practical one, and it is being withheld from people whose lives it could save. Yet the participants understand the situation and still volunteer for the study, and the researchers hope that what they learn will save lives.

Is this relevantly analogous to Tuskegee?

In the reading by Nancy Kass, you have read about another study that resembles this one in many respects: a study carried out in underprivileged Baltimore homes in the 1980s and 1990s to determine how effective different lead-reduction techniques were at lowering the blood-lead levels of children living in those homes.

Is that case relevantly analogous to Tuskegee?

The Kass Guidelines

The major ethical conventions, including and following the U.S. Belmont Report, typically center on the importance of informed consent. But in these cases, informed consent is not the problem it was in the original Tuskegee study and the other mid-20th-century studies.

And if the difficulties researchers are currently struggling with are of a totally different type, then the ethical principles that are relevant to them may still need to be formulated.

In Just Research in an Unjust World, Nancy Kass suggests that what is missing is a harm-reduction strategy for researchers. A crucial feature is that she does not propose that researchers aim to eliminate harm. (So "Do no harm" is out here.) Instead, she proposes a strategy for reducing it.

When is harm-reducing research acceptable? Kass argues that it's when seven thresholds are met.

  1. Accessibility of the 'Gold Standard' intervention. A treatment may provide the perfect cure for a particular condition. But is it really practically accessible to the people who need it?
  2. Strong researcher track record of implementation. According to Kass, this is the most important consideration. Whether or not researchers produce spectacular findings is less relevant than whether those findings get implemented. Researchers with a proven track record of implementing harm-reducing change should be favored strongly.
  3. Other evidence that suggests the community will be reached. The researcher's track record isn't the only thing that matters. If there are donors, local participants, or others involved who have a track record of implementation, they should be factored in as well.
  4. Benefit for the individuals participating in the study. This is not "make or break" according to Kass. It could be outweighed if, say, the track record for implementing change were strong enough. But it should certainly weigh in a researcher's favor.
  5. Susceptibility of the community to exploitation. Research should be taken on with more hesitation the more susceptible a group is to exploitation.
  6. Use and development of local procedures. Local procedures for making ethical decisions about research, if they exist, should be properly used and respected. If they do not exist, then researchers should endeavor to help make sure they get put in place, so that future research decisions can be made by the local community.
  7. Addressing the underlying problem. Researchers should aim to improve the underlying problem. For example, researchers doing needle-exchange work should also make drug abuse treatment available.

In science more generally, not all research is guaranteed to produce a community benefit, nor should it be required to. But Kass' basic thesis is that research on underprivileged communities must benefit people. Not necessarily the community itself, but it must have a strong chance of benefiting someone, and ideally the community in which the research is being done.

Are the Kass guidelines the correct ones?

Some may worry that they still allow a great deal of testing in developing countries. They also rule out a great deal of testing, such as research that may have no immediate beneficial effect but may someday turn out to have one.

Whether or not these effects are desirable depends in part on how the guidelines get implemented in practice. Could they provide an improvement on the present situation? Can we do better? I leave it to you to decide.

What you should know
