Saturday, April 26, 2014

061 - Experimental Pertussis

This is a pretty neat study, though it kinda makes me cringe. You’ll see why.

Whooping cough is a respiratory infection caused by the bacterium Bordetella pertussis, though apparently there was a lot of debate about the pathogen (bacteria or virus?) in the first half of last century. This study was designed in part to test that.

The other part was testing whether Louis Sauer’s vaccine made from B. pertussis could protect against the disease (which would be another indication of its bacterial cause). So how did it go?

H. and E.J. MacDonald, a physician-husband and nurse-wife team, intentionally exposed four healthy brothers, aged 6 to 9 years, to cultures grown from a whooping cough patient. These boys, it turns out, were their own sons. Now that’s dedication to science!1

Two of the boys, the 9-year-old and one of the 8-year-old twins, had been vaccinated by Sauer 5 months before, and the other two (8 and 6 years) had not. None had any previous exposure to pertussis.

The team took a cough plate culture from someone with typical whooping cough and grew cultures from it on agar, checking under a microscope to make sure the culture was pure. Half of the growth on this plate they suspended in saline solution and passed through a filter with pores fine enough to remove bacteria, presumably leaving only viruses, if any were present. The other half of the growth they suspended in saline without filtering.

To start, they squirted a little of the filtered solution into the boys’ noses and throats, then quarantined them in a rural apartment with their mother (the nurse) for 8 weeks. The boys didn’t come down with any symptoms within 18 days (long enough for whooping cough to show up), so the culture didn’t seem to contain a filterable virus.

So then, after the 18 days, the team squirted some of the unfiltered suspension into the boys’ noses and throats, aiming for about 140 bacteria total per boy. First the vaccinated results: neither of the two vaccinated boys showed any symptom or sign of whooping cough over the whole 38-day period. Cultures from their throats and such were consistently negative.

On the other hand, the unvaccinated boys started coughing after only 7 days. Cultures were rated as ++++, which seems very positive, even from the beginning. Over the next few weeks, their fever and coughing increased in severity, they started whooping and vomiting food and mucus, stopped eating much, and had headaches. Seems pretty miserable. Then they got better, fortunately.

After the boys recovered, the team tested the antibodies of all four, as well as those of two people known to be immune and two known to be non-immune, and found that all were positive except the two known non-immunes.

So what could be concluded from this: as few as 140 cells are enough to cause an infection. B. pertussis is the agent that causes whooping cough. Seven days is the incubation period (at least here). Possibly also that the vaccine works pretty well.

On the other hand, it’s definitely a small sample size (2 patients in each group), and there was no blinding or placebo, but it gave very distinct results in a very controlled situation. All of them were known to have been exposed to enough pathogen to cause disease, and none could’ve been exposed from somewhere else. The populations were pretty matched too: two of the boys were twins, one vaccinated and one not. But one could argue that it’s not good enough.
As a minor question, I’m not even sure why they would’ve thought there would be any virus on the culture plate, unless they thought it was stuck to and replicating along with the bacteria or something…
And finally, the cringe-y part: by my understanding of today’s standards for medical research, this seems deeply unethical, exposing children to a potentially deadly disease. But at least we can benefit somewhat from the results.

Some others agree with me in some ways and make observations:
"In...1933 the Macdonald husband-and-wife team performed an experiment on their four sons, from which they concluded that 'a filter-passing virus plays no role in the etiology of pertussis.' The wife, a nurse, sequestered herself with the boys in a rural apartment for eight weeks...Aside from proving that there are hazards in being born into a physician's family, and that B. pertussis could cause whooping cough, the findings did not really exclude the possibility of a direct or indirect role for viruses in the disease. It would have been a hardy virus to survive through two subcultures on agar medium."2 [Though later studies confirm the result.]
"In 1933, Sauer vaccinated 2 of 4 brothers; all 4 brothers were then inoculated in the nose and throat with whooping cough bacillus. The 2 hapless controls (sons of a local physician) developed classic cases of whooping cough while their vaccinated siblings remained healthy."3
The four boys.
Source: National Library of Medicine, and Baker 20003

Citations:
1. MacDonald, H. & MacDonald, E. J. Experimental Pertussis. The Journal of Infectious Diseases 53, 328–330 (1933).
2. Nelson, J. D. Whooping Cough — Viral or Bacterial Disease? New England Journal of Medicine 283, 428–429 (1970).
3. Baker, J. P. Immunization and the American Way: 4 Childhood Vaccines. American Journal of Public Health 90, 199 (2000).

Wednesday, April 23, 2014

A Note on Researching Vaccines (or anything else)

A lesson from my own experience: I've been looking at a lot of vaccine-related websites from both sides recently, for this blog and in general. Some provide lists of research publications that allegedly show some kind of problem with vaccines; some go through all those publications and allege that they are worthless and/or unrelated; and some are the same kinds of lists from the other side (claiming that vaccines are safe and awesome).

And I've found that my feeling of the weight of the evidence depends on which kind of site I'm going through at the moment. If it's a list of allegedly anti-vaccine research, I feel the weight of the evidence is on that side. And vice versa.

Fortunately I recognize that making judgments and conclusions from such feelings would be highly biased and susceptible to error. It's not the number of studies that matters, but rather their quality and relevance. To make a fair, rational judgment of the evidence, one must go through it all and evaluate it as objectively as possible.

Basically my point is, don't rely on feelings of which side has more evidence, because those feelings depend on what you have been exposed to (or even just been exposed to more recently), and you might've missed something. So instead of relying on feelings: compile, catalogue, and calculate, whenever possible.

Monday, April 21, 2014

060 - The Corrected Average Attack Rate from Measles Among City Children

Today’s post is not directly related to vaccines, but indirectly: it’s about measles epidemiology, or the observation of patterns of measles in populations over time; how many cases, in which ages, when it’s fatal, etc.1

Specifically, A.W. Hedrich suspected that reports of measles cases in cities didn't always indicate the true level of measles that existed; the reports were incomplete. So he calculated a correction factor that should help health workers determine if their reports were complete, or estimate what the true rate might be.

The rate of measles varies seasonally, attacking more in winter than in summer (like the flu I guess), but it also cycles up and down in what’s called “epidemic swing,” as you can see in Figure 1 from the paper. Sometimes there could be 13 times more cases in one year than in the next.

Figure 1: Reported measles case rates. Baltimore, MD. 1897-1927. Hedrich 1930.
This is because in a high year, many people are infected and become immune naturally, so there aren’t as many susceptible people to be infected the next year. Levels of immunity might even be high enough to produce some herd immunity effect, where the virus can’t transmit from infected people to susceptible people, because the only contact between those groups is via immune people (who block the transmission). So that’s a low year. But as more people are born, the proportion of susceptible people rises until there’s another epidemic. That’s the natural cycle of measles, in cities at least.
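That cycle can be illustrated with a toy simulation. All the numbers below are invented for illustration, and this is not Hedrich's model; it just captures the qualitative idea that births steadily refill the susceptible pool, and once the pool passes some trigger level, an epidemic burns it back down to a floor.

```python
# Toy model of "epidemic swing". Every parameter here is made up for
# illustration; this is not Hedrich's calculation, just the qualitative
# dynamic: births refill the susceptible pool, and when it passes a
# trigger level an epidemic drains it back down to a floor.

def simulate(years=30, population=100_000, birth_rate=0.02,
             trigger=0.45, floor=0.35):
    """Return a list of yearly case counts for a toy city."""
    susceptible = int(population * floor)  # start just after an epidemic
    cases_by_year = []
    for _ in range(years):
        susceptible += int(population * birth_rate)  # new births
        if susceptible > population * trigger:       # enough fuel: epidemic year
            cases = susceptible - int(population * floor)
            susceptible -= cases                     # the infected become immune
        else:
            cases = 0                                # low year
        cases_by_year.append(cases)
    return cases_by_year

print(simulate())  # a spike every few years, with zeros in between
```

With these made-up parameters, an epidemic fires every few years; lowering `birth_rate` stretches the interval, matching the intuition that a slowly refilling susceptible pool means rarer epidemics.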

This cycle made it difficult to compare between cities though, because obviously comparing a low year in one city to a high year in another would be inaccurate. So it’d be better to compare averages, say over ten years, to even out the variation.

Measles is pretty much a disease of childhood, or at least it was in pre-vaccine days in cities, because hardly anyone avoided it for that long, and generally one infection is enough to confer lifelong immunity. (Not to say it can’t infect adults if they’re susceptible; see Panum's report on measles in the Faroes for what the disease could do to a completely susceptible population.2) But in those days, almost everyone in cities had been exposed by age 15, so Hedrich decided that comparing case rates in people under 15 would be the best strategy. This was especially true because including those over 15 could introduce bias in cities with many immigrants from the countryside, who were often over 15 but still susceptible (measles didn't spread as well in rural settings because of low population density), which could inflate the case rate.

Hedrich compared some surveys of different cities, figuring out what proportion of the population had ever been exposed to measles by their 15th birthday. It was pretty consistent between cities, countries, and over time that this proportion was about 95%.

Figure 2: Measles history rates by age. Hedrich 1930
So one might think: if reports of measles cases across the ages up to 15 don’t add up to 95%, they’re incomplete, and one can calculate a correction factor from that! But one thing this doesn't take into account is children who died before reaching age 15, whether from measles or from other causes; the 95% figure is based on surveys of living children. So Hedrich looked at some data to see what measles mortality was and whether it could affect the correction factor.
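The basic ratio behind that correction factor can be sketched with invented numbers (ignoring the mortality adjustment, which Hedrich found mattered little for measles): if reports imply that only 60% of children had measles by their 15th birthday, while surveys say the true figure is about 95%, then reports caught only 60/95 of true cases, and true counts are roughly 95/60, or about 1.6 times the reported ones.

```python
# Sketch of the correction-factor idea with invented numbers. Hedrich's
# actual calculation is more involved (and also checks the effect of
# deaths before age 15); this shows only the basic ratio.

def correction_factor(reported_cumulative_attack, true_cumulative_attack=0.95):
    """Factor by which reported cases understate true cases, assuming
    ~95% of children have had measles by their 15th birthday."""
    return true_cumulative_attack / reported_cumulative_attack

factor = correction_factor(0.60)        # reports add up to only 60%
estimated_true = round(4_000 * factor)  # e.g. scale up 4,000 reported cases
```

So a health department whose age-specific reports summed well below 95% could both flag its reporting as incomplete and estimate how many cases it was probably missing.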

He found that in Baltimore from 1906 to 1915, measles killed about 4 out of every 1000 children under 15. The deadliest age was around 1 year old, with about 14 in 10000 dying from measles. This doesn't necessarily indicate severity at those ages; it could be that the longer one lived, the more likely one was to have already survived measles.

Figure 3: Data from paper, figure I made. Deaths from measles per million people in Baltimore at a given age.
But anyway, this allowed calculation of the correction factor, and it turned out that fatal cases of measles didn't really affect it much. Though this wouldn't be the case with diseases that had higher mortality, or even sometimes measles epidemics that were especially deadly (like in Aberdeen, Scotland from 1883 to 1902, where the estimated death rate from measles was 2 of every 100 people; pretty scary).

Using this correction factor, Hedrich calculated, with remarkable consistency, that on average 6.5% of city children under 15 get measles each year. He discusses a number of potential confounding factors that could introduce error but decides they don’t change the results significantly. So this could be useful for further study of measles epidemiology.

A number of later papers cite this one as important for later epidemiology, but I think some may have confused it with another of Hedrich's papers, since I couldn't find the claims they attribute to it. Still, it’s interesting:
"Hedrick [sic] demonstrated, in Baltimore, that measles epidemics did not develop when the level of immunity was above 55 per cent. Though all the figures do not necessarily apply to urban areas, his findings do point out that considerably less than 100 per cent of the population need become immune before an epidemic is prevented or halted."3
"Based on the study of Hedrich (1930), Sencer et al. (1967) estimated that in Baltimore during the period 1897-1927 a level of immunity of 55 per cent was sufficient to prevent the development of epidemics."4
"The meticulous studies by A.W. Hedrich of measles diffusion in Baltimore from 1897 to 1927 formed the basis for epidemiological studies of measles for nearly 35 years. By carefully tabulating monthly measles rates and correlating them with the proportion of the population under fifteen years of age, Hedrich was able to develop a ratio of susceptible to immune children and thus account for fluctuations in the incidence of measles. It was determined that when the level of natural immunity exceeded 55 percent, the diffusion rate decreased. However, children escaping epidemics were still susceptible, and as more children were born, the number of susceptibles was augmented. Increased numbers of susceptibles led, in turn, to further epidemic fluctuations in measles."5
Citations:
1. Hedrich, A. W. The Corrected Average Attack Rate from Measles Among City Children. Am. J. Epidemiol. 11, 576–600 (1930).
2. Panum, P. Observations made during the epidemic of measles on the Faroe Islands in the year 1846. Bibliothek for Laeger, Copenhagen 3R, 270–344 (1847).
3. Kogan, B. A. et al. Mass measles immunization in Los Angeles County. Am J Public Health Nations Health 58, 1883–1890 (1968).
4. Griffiths, D. A. The Effect of Measles Vaccination on the Incidence of Measles in the Community. Journal of the Royal Statistical Society. Series A (General) 136, 441–449 (1973).
5. Pyle, G. F. Measles as an Urban Health Problem: The Akron Example. Economic Geography 49, 344–356 (1973).

Saturday, April 19, 2014

O723 - The Neurotropic Virus Diseases

Quote of interest:

"By comparison with the ephemeral effects of antiserum, the protection afforded by vaccination is relatively long-lived. The chief disadvantage of all methods of producing active immunity is the comparatively long time they require to lead to results. While the long incubation period of rabies permits the adoption of such procedure, in acute diseases such as poliomyelitis it is useless to think of vaccination when the patient or animal is already infected. It must be carried out beforehand in anticipation of the coming epidemic. And this means that in the case of a disease like poliomyelitis, which in this country [England] relatively seldom causes serious epidemics, it is very improbable that public opinion will ever be educated to the point of wholesale vaccination. But looking to the future and speaking quite generally, one would be inclined to forecast that in both human and animal medicine vaccination will ultimately prove of greater value than serum therapy."

Citation: Hurst, E. W. The Neurotropic Virus Diseases. The Lancet 226, 758–762 (1935).

Sunday, April 13, 2014

059 - Small-Pox and Vaccination in the Light of Modern Knowledge

In this post, James McIntosh reviews some things about smallpox and vaccination against it.

Smallpox has been known to humanity since the 10th century, and to Europe since the 16th. People confused it with measles at first, and so thought it fairly mild, until it killed some royalty. Mostly it was only fatal in children (not that that's a good thing); 90% of deaths in epidemics were in children under 5. Mortality in people who caught it was typically 30-50%, which is very high for an infectious disease. And virulence seemed to be increasing through the 18th century, so people were excited about immunization.

Somewhat later, virulence seemed to shift toward older people and decrease overall, probably because of immunization, but also because another variety of the disease seemed to appear: alastrim, or variola minor, which causes a much milder disease than the original, even though the two are almost indistinguishable histologically and serologically. Alastrim doesn't seem to make vaccination impossible (which it might if it induced an adequate immune response itself), but vaccination does prevent alastrim. And it did not replace smallpox, which still caused epidemics just as serious as before.

Regarding smallpox itself, McIntosh was uncertain whether Edward Jenner's original virus was really cowpox (vaccinia) or was rather an infection of cows with smallpox. I haven't read anything so far that makes a clear distinction between these possibilities. But in either case, it seemed safer than the practice of the time, called variolation: inoculating people with a little smallpox, which would cause disease, though not as severe as if they caught it naturally, and would induce good immunity. That practice sometimes didn't work out well, as you might expect.

So when Jenner's vaccination (from vaccinia) came along, the practice spread widely because it was safer and milder but just as good immunity-wise. Not completely safe, though, as I mentioned before (040): post-vaccinal encephalitis was a serious side effect of vaccination, in which the immune system seemed to attack the nervous system, often causing paralysis and/or death. Its incidence was 1 in 3555 recipients, or 1 in 31531 in children under 2, which isn't very many, but a lot more than would be preferable. McIntosh had some suggestions for avoiding it, though none he was very certain about: treating cases with serum from vaccinated people, maybe, or preventing it entirely by weakening the virus before vaccinating, though the weakened virus risked being too weak to induce a good immune response. He thought it should be possible to standardize and minimize the dose, letting the body respond before the virus spread too much. I wonder how successful any attempts at that might have been.

Citation: McIntosh, J. Small-Pox and Vaccination in the Light of Modern Knowledge. The Lancet 215, 618–621 (1930).

Thursday, April 10, 2014

O712 - Poliomyelitis; a review of its natural history

One interesting quote:
"It can be postulated that, in regions where poliomyelitis is endemic and where epidemics do not exist or have but lately made their appearance (Japan), the native populations are immune as a result of early exposure to poliomyelitis virus. This is also reflected in the young age distribution of cases. As a possible mechanism for the production of this immunity, it has been observed that sanitary conditions are more primitive in endemic areas than in countries afflicted by epidemics; primitive sanitation would tend to promote constant general dissemination of virus leading to exposure at an early age and the development of active immunity. In countries where sanitation is good or improving, dissemination of virus is generally less, opportunities for immunization are proportionately decreased, and consequently there develops periodically a population ripe for epidemics.
"This hypothesis seems not unreasonable and deserves to be tested by investigations carried out in endemic regions with primitive sanitation for the purpose of determining: 1) the detectability of the virus in the population and environment, (2) the development of antibody in relation to age, and, 3) the number of different immunological types of virus and their relationship to strains isolated in countries where epidemics prevail."
Citation: Ward, R. Poliomyelitis; a review of its natural history. Pediatrics 1, 132–138 (1948).

Saturday, April 5, 2014

058 - Immunity in Influenza: The Bearing of Recent Research Work

In 1939, people had already discovered the influenza virus, but there were still a lot of questions. For example, how can one distinguish between illness caused by this virus and very similar illnesses caused by many other things? And are all flu viruses the same? In this paper, C.H. Andrewes addressed these issues.

People had observed that major outbreaks tended to be the real flu, while minor outbreaks were typically something else. Andrewes called the former "epidemic influenza" and the latter "febrile catarrhs." Clinically, the real thing tended to have a sudden onset of fever, with symptoms like headache and general achiness rather than the cough and sore throat characterizing other illnesses. But there was a lot of overlap.

Mostly they tried to distinguish by infecting ferrets. The real flu tended to infect ferrets while the other stuff didn't. But it could've been possible to infect ferrets with more than one virus; it was hard to tell.

Serologically (that is, regarding the antibodies that bind to a virus), there seemed to be at least 4 different kinds of flu antigen. They hadn't found good ways to characterize these yet. Antibodies against one wouldn't bind as well, or at all, to another.

Regarding duration of immunity, they had found that in animals it lasted a few months, and Andrewes thought that duration seemed to correlate with body size, so it might last up to a year in people. Not sure if this is valid. But in any case, it declined over time, at least in ferrets.

In ferrets, subcutaneous vaccination produced moderate, though not great, immunity. But moderate might be enough: humans wouldn't normally encounter very much flu at a time; even if the vaccine didn't prevent infection completely, it could reduce severity so that more people survived; and if people had some immunity already, it could boost it.

But studying immunity from vaccines is tough, because epidemics are infrequent and unpredictable. So people were trying to use antibody levels as a proxy. Some studies showed that antibodies and immunity correlated well, especially in ferrets, but the relationship was still unclear.

Andrewes tended to use a vaccine of virus inactivated by dilute formaldehyde. This seemed safer than a live attenuated vaccine, because a virus changed to be less virulent could always revert to become more virulent.

Lastly, he discussed when it is best to vaccinate people. Epidemics in England were happening about every 4 years, but this wasn't reliable. They usually happened in December or January, so the best time to vaccinate would be 1-2 months before, in October and November.

Citation: Andrewes, C. H. Immunity in Influenza: The Bearing of Recent Research Work. Proc R Soc Med 32, 145–152 (1939).