Why Don't Nerve Cells Regenerate?

Why Aren’t We More Like Fish and Frogs?

The question of why the mammalian central nervous system does not regenerate after injury is of extraordinary interest at many levels. In terms of descriptive biology, it is remarkable how great the discrepancy is between nerve cells that can and nerve cells that cannot repair their connections after their axons have been lesioned.

In an invertebrate such as the leech, in fishes, and in frogs, the central nervous system does show effective regeneration and restoration of function after complete transection. Thus, a leech can swim again after its nervous system has regenerated following being cut in two, and a frog can catch flies with its tongue after its optic nerve has grown back to the tectum (the dorsal portion of the midbrain, containing the superior and inferior colliculi).

In these “simple” animals the wiring is far more complex than in any man-made circuit, yet somehow fibres grow to find their targets and form effective synapses upon them. In this they resemble their counterparts in the mammalian peripheral nervous system.

What makes the mammalian central nervous system so different in this regard?

At the cellular and molecular level, differences between non-regenerating and regenerating neurons, and the satellite cells that surround them, are the focus of intense research. Detailed information is accumulating about molecules that enhance or inhibit growth, as well as their receptors. And at the level of clinical medicine, there is the essential question of whether and when treatments can be devised for patients with central nervous system injuries so that functions can be restored.

Recent experiments at all of these levels have provided unexpected new findings and insights. Yet one of the most striking features of the field of regeneration today is how many key questions remain. For example, while we have clues, the mechanisms that prevent regeneration in mammalian CNS are still not fully known.

Why is the proportion of axons that actually elongate so small, even when the application of suitable techniques does give rise to successful growth across a lesion?

What changes in the molecular mechanisms of growth occur in immature mammals during development that later prevent regeneration in the adult?

While it seems reasonable to guess that understanding of growth-promoting and growth-inhibiting mechanisms will continue to advance rapidly, a baffling question remains. It arises from our ignorance about normal development of the nervous system.


At present it is not known how specific synapses form, so that one type of cell is selected as a target while another one sitting just next door is ignored. 

If hope is to be offered to patients with spinal cord lesions, axons must not only grow (obviously a prerequisite for repair) but must also re-form useful connections with the appropriate targets. In the best of all possible worlds, no errors would be made. One can also imagine a scenario in which incorrect connections are formed and subsequently tuned by use; one hopes, however, that pain fibres would not re-form inappropriate connections in patients.

All the neuroscientists who work on these problems have to face inevitable and quite natural questions about prospects for therapy.

A convenient analogy seems to me to be the repair of a watch. A desirable requirement would surely be to have an understanding of how the watch works and what the various components are doing. Without that knowledge one can still hope for some new insight or fluke that will allow the repair to be made. It would, however, be dangerous to promise how soon the watch will work again until the failure has been diagnosed and only one or two parts remain to be replaced. Because we are not even remotely at this stage in our knowledge of the nervous system, predictions about how and when seem unrealistic. (This analogy is of course flawed: the nervous system has to do the job on its own once one has provided the appropriate conditions.)

The spinal cord shows much less plasticity than the brain, although the degree is perhaps currently underestimated. Once the long tracts are severed, or compressed to the point of axotomy, they will not recover, and there is usually insufficient overlap of function in the spinal cord for the missing functions to be taken over by surviving tracts, should there indeed be any. The spinal cord is such a narrow structure, normally well protected by the bone forming the spinal canal, that any injury sufficient to damage it in part may well be severe enough to damage it completely.

In the general strategy of devising spinal repair procedures that could eventually be applied in patients, there are at least four problems to be overcome:

  1. Central nervous system neurons show a variable ability to produce neurites in response to injury, in contrast to peripheral nervous system neurons, which show a consistent ability to do so.
  2. Following damage to the CNS, as for example in a spinal cord injury, any neurites that do appear at the site of injury are unable to cross it, which in patients may involve a substantial length of spinal cord.
  3. Once methods for promotion of growth of axons across the site of injury are available in a clinically applicable form, the axons may have to grow considerable distances to reach appropriate targets and may require specific guidance cues to direct them to functionally appropriate targets.
  4. Having reached appropriate targets, effective functional re-innervation of the targets should occur.

It is not entirely clear how separable these four components of successful repair are. There is increasing evidence that they may indeed be substantially separate processes and that achieving one will not automatically lead to success with the others.

Thus the work described by Beazley and Dunlop on regeneration in the lizard shows clearly that while axotomised fibres can regrow to the appropriate targets in the visual system of this species, no functionally effective innervation occurs.

Beazley and Dunlop describe different features of a wide range of species, from cold-blooded vertebrates to mammals. Particularly with respect to the effectiveness of target re-innervation, there appears to be a spectrum of regeneration. It runs from amphibia and lampreys (studied with particular attention to the subcellular structures that may underlie regenerative outgrowth of injured axons), which can regenerate not only new axons but also functional connections with appropriate targets; through lizards, which show excellent axonal growth but inappropriate target innervation; to mammals, in which neither regenerative axon growth nor appropriate target innervation normally occurs. The evolutionary significance of this progressive loss of regenerative ability through the animal kingdom (an ability that is even more marked in invertebrates) is unclear.

The central nervous system of adult mammals, including humans, recovers only poorly from injury. Once severed, major axon tracts (such as those in the spinal cord) never regenerate. The devastating consequences of these injuries—e.g., loss of movement and the inability to control basic bodily functions—have led many neuroscientists to seek ways of restoring the connections of severed axons. There is no a priori reason for this biological failure, since “lower” vertebrates—e.g., lampreys, fish, and frogs—can regenerate a severed spinal cord or optic nerve.

Even in mammals, the inability to regenerate axonal tracts is a special failing of the central nervous system; peripheral nerves can and do regenerate in adult animals, including humans.

Why, then, not the central nervous system?

Neuron injury

At least a part of the answer to this puzzle apparently lies in the molecular cues that promote and inhibit axon outgrowth.

In mammalian peripheral nerves, axons are surrounded by a basement membrane (a proteinaceous extracellular layer composed of collagens, glycoproteins, and proteoglycans) secreted in part by Schwann cells, the glial cells associated with peripheral axons. After a peripheral nerve is crushed, the axons within it degenerate; the basement membrane around each axon, however, persists for months.

One of the major components of the basement membrane is laminin, which (along with other growth-promoting molecules in the basement membrane) forms a hospitable environment for regenerating growth cones. The surrounding Schwann cells also react by releasing neurotrophic factors, which further promote axon elongation.

This peripheral environment is so favourable to regrowth that even neurons from the central nervous system can be induced to extend into transplanted segments of peripheral nerve.

Implantation

Albert Aguayo and his colleagues at the Montreal General Hospital found that grafts derived from peripheral nerves can act as “bridges” for central nervous system neurons (in this case, retinal ganglion cells), allowing them to grow for over a centimeter (Figure A); they even form a few functional synapses in their target tissues (Figure B).

These several observations suggest that the failure of central neurons to regenerate is not due to an intrinsic inability to sprout new axons, but rather to something in the local environment that prevents growth cones from extending.

This impediment could be the absence of growth-promoting factors, such as the neurotrophins, or the presence of molecules that actively prevent axon outgrowth.

Studies by Martin Schwab and his colleagues point to the latter possibility. Schwab found that central nervous system myelin contains an inhibitory component that causes growth cone collapse in vitro and prevents axon growth in vivo. This component, recognized by a monoclonal antibody called IN-1, is found in the myelinated portions of the central nervous system but is absent from peripheral nerves.

IN-1 also recognizes molecules in the optic nerve and spinal cord of mammals, but is missing in the same sites in fish, which do regenerate these central tracts.

Nogo-A, the primary antigen recognized by the IN-1 antibody, is produced by oligodendrocytes (the myelin-forming glial cells of the central nervous system) but not by Schwann cells in the peripheral nervous system. Most dramatically, the IN-1 antibody increases the extent of spinal cord regeneration when provided at the site of injury in rats with spinal cord damage. All this implies that the human central nervous system differs from that of many “lower” vertebrates in that humans and other mammals present an unfavourable molecular environment for regrowth after injury.

Why this state of affairs occurs is not known.

One speculation is that the extraordinary amount of information stored in mammalian brains puts a premium on a stable pattern of adult connectivity.

At present there is only one modestly helpful treatment for CNS injuries such as spinal cord transection. High doses of a steroid, methylprednisolone, given immediately after the injury prevent some of the secondary damage to neurons resulting from the initial trauma.

Although it may never be possible to fully restore function after such injuries, enhancing axon regeneration, blocking inhibitory molecules and providing additional trophic support to surviving neurons could in principle allow sufficient recovery of motor control to give afflicted individuals a better quality of life than they now enjoy. The best “treatment,” however, is to prevent such injuries from occurring, since there is now very little that can be done after the fact.

Reference:

  1. Degeneration and Regeneration in the Nervous System, edited by Norman Saunders and Katarzyna Dziegielewska; Anatomy and Physiology, The University of Tasmania, Australia. © 2000, OPA (Overseas Publishers Association).
  2. Neuroscience, 3rd edition. Editors: Dale Purves, George J Augustine, David Fitzpatrick, Lawrence C Katz, Anthony-Samuel LaMantia, James O McNamara, and S Mark Williams. Sunderland (MA): Sinauer Associates; 2004. ISBN 0-87893-725-0

  3. Bray, G. M., M. P. Villegas-Perez, M. Vidal-Sanz and A. J. Aguayo (1987) The use of peripheral nerve grafts to enhance neuronal survival, promote growth and permit terminal reconnections in the central nervous system of adult rats. J. Exp. Biol. 132: 5–19.
  4. Schnell, L. and M. E. Schwab (1990) Axonal regeneration in the rat spinal cord produced by an antibody against myelin-associated neurite growth inhibitors. Nature 343: 269–272.
  5. So, K. F. and A. J. Aguayo (1985) Lengthy regrowth of cut axons from ganglion cells after peripheral nerve transplantation into the retina of adult rats. Brain Res. 359: 402–406.
  6. Vidal-Sanz, M., G. M. Bray, M. P. Villegas-Perez, S. Thanos and A. J. Aguayo (1987) Axonal regeneration and synapse formation in the superior colliculus by retinal ganglion cells in the adult rat. J. Neurosci. 7: 2894–2909.

Neuroplasticity And Epilepsy: The Effect of Pathological Activity on Neural Circuitry

Epilepsy is a brain disorder characterized by periodic and unpredictable seizures mediated by the rhythmic firing of large groups of neurons. It seems likely that abnormal activity generates plastic changes in cortical circuitry that are critical to the pathogenesis of the disease.

Brain Plasticity: What Is It?

What is brain plasticity?

Does it mean that our brains are made of plastic?

Of course not.

Plasticity, or neuroplasticity, describes how experiences reorganize neural pathways in the brain. Long-lasting functional changes in the brain occur when we learn new things or memorize new information. These changes in neural connections are what we call neuroplasticity.

To illustrate the concept of plasticity, imagine the film in a camera. Pretend that the film represents your brain. Now imagine using the camera to take a picture of a tree. When a picture is taken, the film is exposed to new information — that of the image of a tree. In order for the image to be retained, the film must react to the light and “change” to record the image of the tree. Similarly, in order for new knowledge to be retained in memory, changes in the brain representing the new knowledge must occur.

To illustrate plasticity in another way, imagine making an impression of a coin in a lump of clay. In order for the impression of the coin to appear in the clay, changes must occur in the clay — the shape of the clay changes as the coin is pressed into the clay. Similarly, the neural circuitry in the brain must reorganize in response to experience or sensory stimulation.

  • Neuroplasticity includes several different processes that take place throughout a lifetime – Neuroplasticity does not consist of a single type of morphological change, but rather includes several different processes that occur throughout an individual’s lifetime. Many types of brain cells are involved in neuroplasticity, including neurons, glia, and vascular cells.
  • Neuroplasticity has a clear age-dependent determinant – Although plasticity occurs over an individual’s lifetime, different types of plasticity dominate during certain periods of one’s life and are less prevalent during other periods.
  • Neuroplasticity occurs in the brain under two primary conditions: 1. during normal brain development, when the immature brain first begins to process sensory information, through adulthood (developmental plasticity and plasticity of learning and memory); and 2. as an adaptive mechanism to compensate for lost function and/or to maximize remaining functions in the event of brain injury.
  • The environment plays a key role in influencing plasticity – In addition to genetic factors, the brain is shaped by the characteristics of a person’s environment and by the actions of that same person.

Developmental Plasticity: Synaptic Pruning

Electrical Trigger for Neurotransmission


Gopnik et al. (1999)[Gopnik, A., Meltzoff, A., Kuhl, P. (1999). The Scientist in the Crib: What Early Learning Tells Us About the Mind, New York, NY: HarperCollins Publishers.] describe neurons as growing telephone wires that communicate with one another. Following birth, the brain of a newborn is flooded with information from the baby’s sense organs. This sensory information must somehow make it back to the brain, where it can be processed. To do so, nerve cells must make connections with one another, transmitting the impulses to the brain. Continuing with the telephone-wire analogy, the newborn’s genes, like the basic telephone trunk lines strung between cities, specify the pathway from a particular nerve cell to the correct area of the brain. For example, nerve cells in the retina of the eye send impulses to the primary visual area in the occipital lobe of the brain and not to the area of language comprehension (Wernicke’s area) in the left posterior temporal lobe. The basic trunk lines have been established, but the specific connections from one house to another require additional signals.
Over the first few years of life, the brain grows rapidly. As each neuron matures, it sends out multiple branches (axons, which send information out, and dendrites, which take in information), increasing the number of synaptic contacts and laying the specific connections from house to house, or in the case of the brain, from neuron to neuron. At birth, each neuron in the cerebral cortex has approximately 2,500 synapses. By the time an infant is two or three years old, the number of synapses is approximately 15,000 synapses per neuron (Gopnik et al., 1999). This amount is about twice that of the average adult brain. As we age, old connections are deleted through a process called synaptic pruning.

Synaptic pruning eliminates weaker synaptic contacts while stronger connections are kept and strengthened. Experience determines which connections will be strengthened and which will be pruned; connections that have been activated most frequently are preserved. Neurons must have a purpose to survive. Without a purpose, neurons die through a process called apoptosis in which neurons that do not receive or transmit information become damaged and die. Ineffective or weak connections are “pruned” in much the same way a gardener would prune a tree or bush, giving the plant the desired shape. It is plasticity that enables the process of developing and pruning connections, allowing the brain to adapt itself to its environment.
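As a rough caricature, the use-it-or-lose-it rule described above can be sketched in code: tag each synaptic contact with an activation count and eliminate those below a cutoff. The synapse names, counts, and cutoff here are invented purely for illustration; real pruning depends on complex molecular signalling, not a single number.

```python
# Toy sketch of activity-dependent synaptic pruning: connections that
# have been activated most frequently are preserved; rarely used ones
# are eliminated. All names and numbers are illustrative only.

def prune(synapses, cutoff):
    """Keep only synapses whose activation count reaches the cutoff."""
    return {name: count for name, count in synapses.items() if count >= cutoff}

# Hypothetical activation counts for four synaptic contacts.
activations = {"s1": 120, "s2": 3, "s3": 57, "s4": 0}
surviving = prune(activations, cutoff=10)
print(sorted(surviving))  # the weakly used contacts s2 and s4 are pruned
```

The gardener analogy maps directly onto the cutoff: experience raises the counts of some contacts, and the pass over the dictionary does the pruning.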

Wiring of the Brain

Plasticity of Learning and Memory

It was once believed that as we aged, the brain’s networks became fixed. In the past two decades, however, an enormous amount of research has revealed that the brain never stops changing and adjusting. 

Learning, as defined by Tortora and Grabowski (1996)[Tortora, G. and Grabowski, S. (1996). Principles of Anatomy and Physiology. (8th ed.), New York: HarperCollins College Publishers.], is the ability to acquire new knowledge or skills through instruction or experience.

Memory is the process by which that knowledge is retained over time.

The capacity of the brain to change with learning is plasticity.

So how does the brain change with learning?

According to Drubach (2000)[Drubach, D. (2000). The Brain Explained, Upper Saddle River, NJ: Prentice-Hall, Inc.], there appear to be at least two types of modifications that occur in the brain with learning:

  1. A change in the internal structure of the neurons, the most notable being in the area of synapses.
  2. An increase in the number of synapses between neurons.

Initially, newly learned data are “stored” in short-term memory, which is a temporary ability to recall a few pieces of information. Some evidence supports the concept that short-term memory depends upon electrical and chemical events in the brain as opposed to structural changes such as the formation of new synapses. One theory of short-term memory states that memories may be caused by “reverberating” neuronal circuits — that is, an incoming nerve impulse stimulates the first neuron, which stimulates the second, and so on, with branches from the second neuron synapsing with the first. After a period of time, information may be moved into a more permanent type of memory, long-term memory, which is the result of anatomical or biochemical changes that occur in the brain (Tortora and Grabowski, 1996).


Injury-induced Plasticity: Plasticity and Brain Repair

During brain repair following injury, plastic changes are geared towards maximizing function in spite of the damaged brain. In studies involving rats in which one area of the brain was damaged, brain cells surrounding the damaged area underwent changes in their function and shape that allowed them to take on the functions of the damaged cells. Although this phenomenon has not been widely studied in humans, data indicate that similar (though less effective) changes occur in human brains following injury.

[Source: https://faculty.washington.edu/chudler/plast.html]

The importance of neuronal plasticity in epilepsy is indicated most clearly by an animal model of seizure production called kindling. To induce kindling, a stimulating electrode is implanted in the brain, often in the amygdala (a component of the limbic system that makes and receives connections with the cortex, thalamus, and other limbic structures, including the hippocampus). The name amygdala comes from the Greek word for almond, after the structure’s almond-like shape; research shows that the amygdalae play a primary role in the processing of memory, decision-making, and emotional reactions.

  • In mammals the amygdala is found beneath the rhinal fissure, closely related to the lateral olfactory tract. This almond-like structure, ranging from 1–4 cm (averaging about 1.8 cm), has extensive connections with the rest of the brain.

At the beginning of such an experiment, weak electrical stimulation, in the form of a low-amplitude train of electrical pulses, has no discernible effect on the animal’s behaviour or on the pattern of electrical activity in the brain (laboratory rats or mice have typically been used for such studies). As this weak stimulation is repeated once a day for several weeks, it begins to produce behavioural and electrical indications of seizures. By the end of the experiment, the same weak stimulus that initially had no effect now causes full-blown seizures. This phenomenon is essentially permanent; even after an interval of a year, the same weak stimulus will again trigger a seizure. Thus, repetitive weak activation produces long-lasting changes in the excitability of the brain that time cannot reverse. The word kindling is therefore quite appropriate: A single match can start a devastating fire.
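The kindling schedule described above can be caricatured as a toy model in which each sub-threshold stimulus permanently lowers the seizure threshold a little, until the unchanged weak stimulus finally triggers a seizure. The stimulus strength, starting threshold, and daily decrement below are invented for illustration; real kindling reflects lasting synaptic changes, not a single scalar threshold.

```python
# Toy model of kindling: one weak stimulus per day; each sub-threshold
# stimulation permanently lowers the seizure threshold, so the same weak
# stimulus eventually evokes a full seizure. Numbers are illustrative.

def kindle(stimulus=10, threshold=50, sensitization=2, days=60):
    """Return the first day the stimulus reaches threshold, else None."""
    for day in range(1, days + 1):
        if stimulus >= threshold:
            return day              # seizure evoked on this day
        threshold -= sensitization  # lasting increase in excitability
    return None                     # never kindled within the period

print(kindle())         # with these numbers, the first seizure is on day 21
print(kindle(days=10))  # too few stimulations: no seizure (None)
```

Because the threshold only ever decreases, the model also captures the permanence of kindling: once the threshold has dropped to the stimulus level, the same weak pulse triggers a seizure on every later day.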

The changes in the electrical patterns of brain activity detected in kindled animals resemble those in human epilepsy. The behavioural manifestations of epileptic seizures in human patients range from mild twitching of an extremity, to loss of consciousness and uncontrollable convulsions. Although many highly accomplished people have suffered from epilepsy (Alexander the Great, Julius Caesar, Napoleon, Dostoyevsky, and Van Gogh, to name a few), seizures of sufficient intensity and frequency can obviously interfere with many aspects of daily life. Moreover, uncontrolled convulsions can lead to excitotoxicity.

Up to 1% of the population is afflicted, making epilepsy one of the most common neurological problems.

Modern thinking about the causes (and possible cures) of epilepsy has focussed on where seizures originate and the mechanisms that make the affected region hyperexcitable.

Most of the evidence suggests that abnormal activity in small areas of the cerebral cortex (called foci) provides the triggers for a seizure that then spreads to other synaptically connected regions. For example, a seizure originating in the thumb area of the right motor cortex will first be evident as uncontrolled movement of the left thumb that subsequently extends to other more proximal limb muscles, whereas a seizure originating in the visual association cortex of the right hemisphere may be heralded by complex hallucinations in the left visual field. The behavioural manifestations of seizures therefore provide important clues for the neurologist seeking to pinpoint the abnormal region of cerebral cortex. Epileptic seizures can be caused by a variety of acquired or congenital factors, including cortical damage from trauma, stroke, tumors, congenital cortical dysgenesis (failure of the cortex to grow properly), and congenital vascular malformations. One rare form of epilepsy, Rasmussen’s encephalitis, is an autoimmune disease that arises when the immune system attacks the brain, using both humoral (i.e., antibodies) and cellular (lymphocytes and macrophages) agents that can destroy neurons. Some forms of epilepsy are heritable, and more than a dozen distinct genes have been demonstrated to underlie unusual types of epilepsy. However, most forms of familial epilepsy (such as juvenile myoclonic epilepsy and petit mal epilepsy) are caused by the simultaneous inheritance of more than one mutant gene.

No effective prevention or cure exists for epilepsy. Pharmacological therapies that successfully inhibit seizures are based on two general strategies.

One approach is to enhance the function of inhibitory synapses that use the neurotransmitter GABA [γ-aminobutyric acid, the chief inhibitory neurotransmitter in the mammalian central nervous system; it plays the principal role in reducing neuronal excitability throughout the nervous system and, in humans, is also directly involved in the regulation of muscle tone]; the other is to limit action potential firing by acting on voltage-gated Na+ channels. Commonly used antiseizure medications include carbamazepine, phenobarbital, phenytoin (Dilantin®), and valproic acid.

These agents, which must be taken daily, successfully inhibit seizures in 60–70% of patients. In a small fraction of patients, the epileptogenic region can be surgically excised. In extreme cases, physicians resort to cutting the corpus callosum to prevent the spread of seizures (most of the “split-brain” subjects were patients suffering from intractable epilepsy).

One of the major reasons for controlling epileptic activity is to prevent the more permanent plastic changes that would ensue as a consequence of abnormal and excessive neural activity.

Brain: corpus callosum

  • THE CORPUS CALLOSUM
    The corpus callosum (Latin for “tough body”) is by far the largest bundle of nerve fibers in the entire nervous system. Its population has been estimated at 200 million axons (the true number is probably higher, as this estimate was based on light microscopy rather than on electron microscopy), a number to be contrasted with 1.5 million for each optic nerve and 32,000 for the auditory nerve. Its cross-sectional area is about 700 square millimeters, compared with a few square millimeters for the optic nerve. It joins the two cerebral hemispheres, along with a relatively tiny fascicle of fibers called the anterior commissure, as shown. The word commissure signifies a set of fibers connecting two homologous neural structures on opposite sides of the brain or spinal cord; thus the corpus callosum is sometimes called the great cerebral commissure.
    Until about 1950 the function of the corpus callosum was a complete mystery. On rare occasions, the corpus callosum in humans is absent at birth, in a condition called agenesis of the corpus callosum. Occasionally it may be completely or partially cut by the neurosurgeon, either to treat epilepsy (thus preventing epileptic discharges that begin in one hemisphere from spreading to the other) or to make it possible to reach a very deep tumor, such as one in the pituitary gland, from above. In none of these cases had neurologists and psychiatrists found any deficiency; someone had even suggested (perhaps not seriously) that the sole function of the corpus callosum was to hold the two cerebral hemispheres together. Until the 1950s we knew little about the detailed connections of the corpus callosum. It clearly connected the two cerebral hemispheres, and on the basis of rather crude neurophysiology it was thought to join precisely corresponding cortical areas on the two sides. Even cells in the striate cortex were assumed to send axons into the corpus callosum to terminate in the exactly corresponding part of the striate cortex on the opposite side.
    In 1955 Ronald Myers, a graduate student studying under psychologist Roger Sperry (Roger Wolcott Sperry) at the University of Chicago, did the first experiment that revealed a function for this immense bundle of fibers. Myers trained cats in a box containing two side-by-side screens onto which he could project images, for example a circle onto one screen and a square onto the other. He taught a cat to press its nose against the screen with the circle, in preference to the one with the square, by rewarding correct responses with food and punishing mistakes mildly by sounding an unpleasantly loud buzzer and pulling the cat back from the screen gently but firmly. By this method the cat could be brought to a fairly consistent performance in a few thousand trials. (Cats learn slowly; a pigeon will learn a similar task in tens to hundreds of trials, and we humans can learn simply by being told. This seems a bit odd, given that a cat’s brain is many times the size of a pigeon’s. So much for the sizes of brains.)
    Not surprisingly, Myers’ cats could master such a task just as fast if one eye was closed by a mask. Again not surprisingly, if a task such as choosing a triangle or a square was learned with the left eye alone and then tested with the right eye alone, performance was just as good. This seems not particularly impressive, since we too can easily do such a task. The reason it is easy must be related to the anatomy: each hemisphere receives input from both eyes, since a large proportion of cells in area 17 receive input from both eyes. Myers now made things more interesting by surgically cutting the optic chiasm in half, by a fore-and-aft cut in the midline, thus severing the crossing fibers but leaving the uncrossed ones intact—a procedure that takes some surgical skill. Thus the left eye was attached only to the left hemisphere and the right eye to the right hemisphere. The idea now was to teach the cat through the left eye and test it with the right eye: if it performed correctly, the information necessarily would have crossed from the left hemisphere to the right through the only route known, the corpus callosum. Myers did the experiment: he cut the chiasm longitudinally, trained the cat through one eye, and tested it through the other—and the cat still succeeded. Finally, he repeated the experiment in an animal whose chiasm and corpus callosum had both been surgically divided. The cat now failed. Thus he established, at long last, that the callosum actually could do something—although we would hardly suppose that its sole purpose was to allow the few people or animals with divided optic chiasms to perform with one eye after learning a task with the other. [Source: David Hubel’s Eye, Brain, and Vision]

The corpus callosum is a thick, bent plate of axons near the center of this brain section, made by cutting apart the human cerebral hemispheres and looking at the cut surface.

Here the brain is seen from above. On the right side an inch or so of the top has been lopped off. We can see the band of the corpus callosum fanning out after crossing, and joining every part of the two hemispheres. (The front of the brain is at the top of the picture.)

The Corpus Callosum Defined

Imagine for a moment two people who think and behave in very similar ways yet perceive the world a bit differently from one another.

What if they could share their thoughts, then modify them into a single world view based on both perceptions?

This may seem weird, but our brain works this way thanks to the corpus callosum.

Located near the center of the brain, this structure is the largest bundle of nerve fibers that connects the left and right cerebral hemispheres, much like a bridge. Traffic flows in both directions, but instead of vehicles travelling over the gap, it is information.


The corpus callosum is near the center of the brain and is covered by the cerebral hemispheres.

Split Brain Patients

Until the early 1950s, the function of the corpus callosum had eluded scientists. No one knew what it did, except to connect the two cerebral hemispheres. By the 1960s, scientists at least knew that nerve fibers within the callosum connected corresponding areas in the two hemispheres but did not yet understand the complexity involved. However, this limited knowledge was used in an attempt to help patients who suffered from severe and constant seizures.

Normally, electrical activity in the brain flows down specific pathways. This is not so during seizures, when electrical discharges can spread anywhere in the brain and stimulate the uncoordinated muscular activity that many people associate with a seizure. Neurosurgeons developed a procedure to cut the corpus callosum and stop the spread of this activity from one hemisphere to the other; Roger Sperry famously studied the patients who underwent it. This procedure was a last-ditch effort to normalize the lives of seizure patients, and it was very effective. However, there were a few unexpected results.

After surgery, some patients exhibited contrary behaviours, such as pulling their pants on with one arm while simultaneously pulling them off with the other. Another patient began to shake his wife aggressively with his left hand while his right hand intervened to stop the attack. These results triggered a plethora of investigations, which eventually led to the understanding that each hemisphere tends to specialize in certain activities, e.g., speech (left hemisphere) or emotional reactivity (right hemisphere).

After the patient’s callosum was cut, the attack on his wife was instigated by the right hemisphere (via his left hand) because the left hemisphere (right hand) didn’t realize what was happening soon enough to prevent it. Such a conflict ordinarily would have been resolved in the patient’s brain before the external behaviour was produced.

To a greater or lesser degree, both hemispheres contribute to the initiation of a particular behaviour. It is the corpus callosum that provides the communication pathway to coordinate these activities and helps to incorporate them into daily life. Without this brain structure, we would literally have two separate personalities in our heads, each with its own agenda. [Source: Jay Mallonee. Jay is a wildlife biologist, college professor and writer. His master’s degree is in neurobiology and he has studied animal behaviour since 1976.]


Source: Neuroscience, 3rd edition

Editors: Dale Purves, George J Augustine, David Fitzpatrick, Lawrence C Katz, Anthony-Samuel LaMantia, James O McNamara, and S Mark Williams.

Sunderland (MA): Sinauer Associates; 2004.
ISBN 0-87893-725-0

Epilepsy: Still an Enigma for Most People


“It starts with the sensation of a light switch being pulled violently behind my eyes. I lose cognitive control quickly. I can’t focus on even a simple task, and I forget what I’m doing while I’m in the middle of doing it. I could pick up a pen, then forget why I’m holding it. As the weight behind my eyes intensifies, my eyes roll into my head and start to flutter so rapidly it feels like they are going to pop out. This can last for a split second, a few hours, a few days, or a week.

I have epilepsy, a neurological disorder characterised by recurring, unprovoked seizures. The episodes I described are seizures — they are simply misfiring neurones. Sometimes, these seizures are affected by the season. Other times, they are more easily triggered by stress.

I had my first Tonic-Clonic Seizure in December 1996, my Grade Six year. I fell unconscious into a snow bank in the parking lot of my elementary school. Many people are more familiar with this seizure’s older name, Grand Mal.

I was diagnosed with Generalized Seizures that same year. This news left my family and teachers confused. I had never shown any signs of what they thought of as epilepsy, the violent shaking on the ground portrayed in movies and on TV. No one realised I had been having seizures for many years. Instead, they misread my childhood behaviour as misbehaving.

I frequently blacked out for split seconds in elementary school. The blackouts were likely Absence Seizures, a type of seizure that looks like daydreaming. Even though the blackouts happened on a regular basis, they were almost impossible to spot with an untrained eye.

In a Grade Three art class, I blacked out and knocked over a cup of water that contained a few paint brushes. At the time, no one realised it was a Partial Seizure. When I came to, my teacher asked me why I had done that. She told me it was a disturbance to the class and I needed to watch my behaviour.

I had never even heard the word “seizure” before the age of 11. Without a reference point, these incidents in school seemed normal to me. As far as I was concerned, I didn’t have seizures; I just needed to control my behaviour so I would stop getting in trouble at school.

There was no information about raising a child with Generalized Seizures available to families living with epilepsy in the late-’90s. My family and my teachers didn’t know anyone with epilepsy who could help them figure it out. There were no community epilepsy agencies in our area at the time. Without available resources, we felt left in the dark.

Without a clear understanding of epilepsy as I grew up, it became difficult for me to talk about it with others in my life. As a young adult, I would often avoid discussing it with boyfriends, new employers, and new friends.” [Source: Undiagnosed Epilepsy Made People Think I Was Acting Out]


Globally, one in 100 people is diagnosed with epilepsy. It is one of the most common neurological conditions worldwide, yet public knowledge of it is extremely limited. Many people with epilepsy never talk publicly about their diagnosis, fearing discrimination. Portrayals of seizures and of seizure first aid on television are often inaccurate, and myths and misconceptions about epilepsy persist.

Epilepsy is a chronic disorder characterised by recurrent seizures, which may vary from a brief lapse of attention or muscle jerks to severe and prolonged convulsions. The seizures are caused by sudden, usually brief, excessive electrical discharges in a group of brain cells (neurones). In most cases, epilepsy can be successfully treated with anti-epileptic drugs. [Source: http://www.who.int/topics/epilepsy/en/]

 


Painting by Sir Charles Bell (1809) showing opisthotonus in a patient suffering from tetanus. Opisthotonus (धनुर्वात), Tetanus (धनुस्तम्भ)

Imitators of Epilepsy

  • Fainting (syncope)
  • Mini-strokes (transient ischemic attacks or TIAs)
  • Hypoglycemia (low blood sugar)
  • Migraine with confusion
  • Sleep disorders, such as narcolepsy and others
  • Movement disorders: tics, tremors, dystonia
  • Fluctuating problems with body metabolism
  • Panic attacks
  • Nonepileptic (psychogenic) seizures

 

What is a Seizure?

A seizure is a brief disruption in normal brain activity that interferes with brain function.

The brain is made up of billions of cells called neurones, which communicate by sending electrical messages. Brain activity is a rhythmic process characterised by groups of neurones communicating with other groups of neurones. During a seizure, large groups of brain cells send messages simultaneously (known as “hypersynchrony”), which temporarily disrupts normal brain function in the regions where the seizure activity is occurring.

Seizures can cause temporary changes or impairments in a wide range of functions. Any function that the brain has can potentially be affected by a seizure, such as behaviour, sensory perception (vision, hearing, taste, touch, smell), attention, movement, emotion, language function, posture, memory, alertness, and/or consciousness. Not all seizures are the same: some may affect only one or two discrete functions, while others affect a wide range of brain functions.

Most people associate a seizure with a loss of consciousness and rhythmic jerking movements. Some seizures do cause convulsive body movements and a loss of consciousness, but not all. There are many different kinds of seizures. A temporary uncontrollable twitching of a body part could be due to a seizure. A sudden, brief change in feeling or a strange sensation could be due to a seizure.

Most seizures are brief events that last from several seconds to a couple of minutes and normal brain function will return after the seizure ends. Recovery time following a seizure will vary. Sometimes recovery is immediate as soon as the seizure is over. Other types of seizures are associated with an initial period of confusion afterwards. Following some types of seizures, there may be a more prolonged period of fatigue and/or mood changes.

What is the Difference Between a Seizure and Epilepsy?

A seizure is a brief episode caused by a transient disruption in brain activity that interferes with one or more brain functions.

Epilepsy is a brain disorder associated with an increased susceptibility to seizures.

When a person experiences a seizure, it does not necessarily indicate that they have epilepsy; there are many possible reasons a seizure could happen. When someone has been diagnosed with epilepsy, it indicates that they have had seizures (usually two or more) and are considered to have an increased risk of future seizures due to a brain-related cause.

Causes of Epilepsy

Just as there are many different types of epilepsy, there are many different causes too, which include:

  • a brain injury or damage to the brain – Anything that can injure the brain is a potential cause of epilepsy including head trauma; stroke; brain injury during birth; neurodegenerative diseases; brain tumours; and many others. Epilepsy may begin weeks, months or years after an injury to the brain.
  • structural abnormalities that arise during brain development – Sometimes these structural changes in the brain are visible on a brain scan (such as an MRI), other times there could be subtle changes in brain structure that are not easy to detect with current imaging techniques. Epilepsy due to a structural abnormality may begin early in life, during adolescence or in adulthood.
  • genetic factors – Some genetic causes of epilepsy are inherited, and there may be other family members with epilepsy, while other genetic factors that cause epilepsy occur at random.
  • infections of the brain, including brain abscess, meningitis, encephalitis, and HIV/AIDS
  • a combination of two or more of the above factors

For many people with epilepsy, the cause of their seizures is unknown. It is hoped that research and new developments in diagnostic testing will provide more answers for people with epilepsy and their families.

Epilepsy that does not get better after two or three anti-seizure drugs have been tried is called “medically refractory epilepsy.” In this case, the doctor may recommend surgery to:

  • Remove the abnormal brain cells causing the seizures.
  • Place a vagus nerve stimulator (VNS). This device, similar to a heart pacemaker, can help reduce the number of seizures.

Neural Stem Cells – Promise and Perils

One of the most highly publicized issues in biology over the past several years has been the use of stem cells as a possible way of treating a variety of neurodegenerative conditions, including Parkinson’s, Huntington’s, and Alzheimer’s diseases.

Amidst the social, political, and ethical debate set off by the promise of stem cell therapies, an issue that tends to get lost is…

What, exactly, is a stem cell?

Neural stem cells are an example of a broader class of stem cells called somatic stem cells. These cells are found in various tissues, either during development or in the adult.

All somatic stem cells share two fundamental characteristics:

  1. they are self-renewing, and
  2. upon terminal division and differentiation they can give rise to the full range of cell classes within the relevant tissue.

Thus, a neural stem cell can give rise to another neural stem cell or to any of the main cell classes found in the central and peripheral nervous system (inhibitory and excitatory neurons, astrocytes, and oligodendrocytes; Figure A).

A neural stem cell is therefore distinct from a progenitor cell, which is incapable of continuing self-renewal and usually has the capacity to give rise to only one class of differentiated progeny.

  • An oligodendroglial progenitor, for example, continues to give rise to oligodendrocytes until its mitotic capacity is exhausted;
  • a neural stem cell, in contrast, can generate more stem cells as well as a full range of differentiated neural cell classes, presumably indefinitely.
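The stem-cell/progenitor distinction above can be caricatured in a few lines of code. This is purely a toy sketch, not biology from the source: the class names, the three-division limit, and the progeny lists are all invented for illustration.

```python
import random

class OligodendroglialProgenitor:
    """Toy progenitor: one progeny class, finite mitotic capacity."""
    def __init__(self, divisions_left=3):  # '3' is an arbitrary toy limit
        self.divisions_left = divisions_left

    def divide(self):
        # Yields one differentiated cell per division until exhausted.
        if self.divisions_left == 0:
            return None
        self.divisions_left -= 1
        return "oligodendrocyte"

class NeuralStemCell:
    """Toy stem cell: self-renewing, full range of neural cell classes."""
    PROGENY = ["excitatory neuron", "inhibitory neuron",
               "astrocyte", "oligodendrocyte"]

    def divide(self):
        # Self-renewal: the cell persists indefinitely, and each division
        # can produce either another stem cell or a differentiated cell.
        return random.choice(self.PROGENY + ["neural stem cell"])

progenitor = OligodendroglialProgenitor()
print([progenitor.divide() for _ in range(5)])
# the progenitor runs dry once its mitotic capacity is spent

stem = NeuralStemCell()
print({stem.divide() for _ in range(100)})  # draws from the full repertoire
```

The point of the caricature is only the asymmetry: the progenitor's `divide` eventually returns nothing, while the stem cell's never does and can emit any class of progeny, including another stem cell.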

Neural stem cells, and indeed all classes of somatic stem cells, are distinct from embryonic stem cells.

Embryonic stem cells (also known as ES cells) are derived from pre-gastrula embryos. ES cells also have the potential for infinite self-renewal and can give rise to all tissue and cell types throughout the organism, including germ cells that can generate gametes (recall that somatic stem cells can generate only tissue-specific cell types).

There is some debate about the capacity of somatic stem cells to assume embryonic stem cell properties.

Some experiments with hematopoietic and neural stem cells indicate that these cells can give rise to appropriately differentiated cells in other tissues; however, some of these experiments have not been replicated.

The ultimate therapeutic promise of stem cells—neural or other types—is their ability to generate newly differentiated cell classes to replace those that may have been lost due to disease or injury.

Such therapies have been imagined for some forms of diabetes (replacement of the islet cells that secrete insulin) and some hematopoietic diseases. In the nervous system, stem cell therapies have been suggested for replacing dopaminergic cells lost to Parkinson’s disease and for replacing lost neurons in other degenerative disorders.

While intriguing, this projected use of stem cell technology raises some significant perils.

  • These include ensuring the controlled division of stem cells when introduced into mature tissue, and
  • identifying the appropriate molecular instructions to achieve differentiation of the desired cell class.

Clearly, the latter challenge will need to be met with a fuller understanding of the signalling and transcriptional regulatory steps used during development to guide differentiation of relevant neuron classes in the embryo.

At present, there is no clinically validated use of stem cells for human therapeutic applications in the nervous system. Nevertheless, some promising work in mice and other experimental animals indicates that both somatic and ES cells can acquire distinct identities if given appropriate instructions in vitro (i.e., prior to introduction into the host), and if delivered into a supportive host environment.

For example, ES cells grown in the presence of platelet-derived growth factor, which biases progenitors toward glial fates, can generate oligodendroglial cells that can myelinate axons in myelin-deficient rats. Similarly, ES cells pretreated with retinoic acid matured into motor neurons when introduced into the developing spinal cord (Figure below).

While such experiments suggest that a combination of proper instruction and correct placement can lead to appropriate differentiation, there are still many issues to be resolved before the promise of stem cells for nervous system repair becomes a reality.

Schematic of the injection of fluorescently labeled embryonic stem (ES) cells into the spinal cord of a host chicken embryo. ES cells integrate into the host spinal cord and apparently extend axons. The progeny of the grafted ES cells are seen in the ventral horn of the spinal cord; they have motor neuron-like morphologies, and their axons extend into the ventral root. (From Wichterle et al., 2002.)

Source: Neuroscience, 3rd edition
Editors: Dale Purves, George J Augustine, David Fitzpatrick, Lawrence C Katz, Anthony-Samuel LaMantia, James O McNamara, and S Mark Williams.
Sunderland (MA): Sinauer Associates; 2004.
ISBN 0-87893-725-0

Microglia – Setting a New Paradigm in Brain Development, Homeostasis and Disease

Microglia Cells Accelerate Damage from Retinitis Pigmentosa, Other Blinding Eye Diseases; Finding May Suggest Entirely New Therapeutic Strategies; Targeting Microglia May Complement Gene Therapy for RP http://www.bioquicknews.com/node/2770

The traditional role of microglia has been in brain infection and disease, phagocytosing debris and secreting factors to modify disease progression.

Recent evidence extends their role to healthy brain homoeostasis, including the regulation of cell death, synapse elimination, neurogenesis and neuronal surveillance. These actions contribute to the maturation and plasticity of neural circuits that ultimately shape behaviour.

Gardeners know that some trees require regular pruning: some of their branches have to be cut so that others can grow stronger.

The same is true of the developing brain: cells called microglia prune the connections between neurones, shaping how the brain is wired.[https://www.embl.de/aboutus/communication_outreach/media_relations/2011/110721_Monterotondo/]

Electron micrograph showing the interaction of microglia (immunolabeled with an antibody to Iba1) and synaptic elements. Blue: axon terminal; Purple: dendrite and dendritic spine; Pink: microglia; Green: astrocyte. Scale bar: 250nm. http://www.urmc.rochester.edu/labs/majewska-lab/projects/microglial_function_in_the_healthy_brain

Microglia, the immune cells of the brain, have long been the underdogs of the glia world, passed over for other, flashier cousins, such as astrocytes. Although microglia are best known for being the brain’s primary defenders, scientists now realise that they play a role in the developing brain and may also be implicated in developmental and neurodegenerative disorders.

The change in attitude is clear, as evidenced by the buzz around this topic at this year’s Society for Neuroscience (SfN) conference, which took place from October 17 to 21 in Chicago (2015), where scientists discussed microglia’s role in both health and disease.

Activated in the diseased brain, microglia find injured neurones and strip away the synapses, the connections between them. These cells make up around 10 percent of all the cells in the brain and appear during early development.

For decades scientists focussed on them as immune cells and thought that they were quiet and passive in the absence of an outside invader. That all changed in 2005 when experimenters found that microglia were actually the fastest-moving structures in a healthy adult brain. Later discoveries revealed that their branches were reaching out to surrounding neurones and contacting synapses. These findings suggested that these cellular scavengers were involved in functions beyond disease.

Microglial dynamics           https://www.youtube.com/watch?v=XPsGiTVNVnU

(In vivo imaging of microglial dynamics in Cx3CR1-GFP mice. Images were taken 5 minutes apart. Notice that dynamic microglial processes constantly explore the brain environment.)


Types of Synapses

                                 

The Brain’s Sculptors

The discovery that microglia were active in the healthy brain jump-started the exploration into their underlying mechanisms:

Why do these cells hang around synapses?

And what are they doing?

For reasons scientists don’t yet understand, the brain begins with more synapses than it needs. “As the brain is making its [connections], it’s also eliminating them,” says Cornelius Gross, a neuroscientist at the European Molecular Biology Laboratory. Microglia are critical to this process, called pruning: they gobble up synapses, thus helping to sculpt the brain by eliminating unwanted connections.

But how do microglia know which synapses to get rid of and which to leave alone?

New evidence suggests that a protective tag that keeps healthy cells from being eaten by the body’s immune system may also shield against microglial activity in the brain. Emily Lehrman, a doctoral candidate in neuroscientist Beth Stevens’s laboratory at Boston Children’s Hospital, presented these unpublished findings at this year’s SfN conference.

“The [protective tag]’s receptor is highly expressed in microglia during peak pruning,” Lehrman says. Without an abundance of this receptor, the tag is unable to protect the cells, leading to excess engulfment by microglia and overpruning of neuronal connections.

But pruning is not always a bad thing. Other molecules work to ensure that microglia remove weak connections, which can be detrimental to brain function.

Gross and his research group have been investigating the activity of fractalkine, a key molecule in neuron-microglia signalling whose receptors are found exclusively on microglia. “Microglia mature in a way that matches synaptogenesis, which sets up the hypothesis that neurones are calling out to microglia during this period,” Gross says.

His lab found that removing the receptor for fractalkine created an overabundance of weak synaptic contacts caused by deficient synaptic pruning during development in the hippocampus, a brain area involved in learning and memory. These pruning problems led to decreased functional connectivity in the brain, impaired social interactions and increased repetitive behaviour—all telltale signs of autism. Published last year in Nature Neuroscience, this work was also presented at the conference.

When Pruning Goes Awry

Studies have also found evidence for increased microglial activation in individuals with schizophrenia and autism; however, whether increased microglial activity is a cause or effect of these diseases is unclear. “We still need to understand whether pruning defects are contributing to these developmental disorders,” Stevens says.

Some findings are emerging from studies on Rett syndrome, a rare form of autism that affects only girls. Dorothy Schafer, now at the University of Massachusetts Medical School, studied microglia’s role in Rett syndrome while she was a postdoctoral researcher in Stevens’s lab. Using mice with mutations in MECP2, the predominant cause of the disease, she found that while microglia were not engulfing synapses during early development, the phagocytic capacity (or gobbling ability) of these cells increased during the late stages of the disease. These unpublished results suggest that microglia were responding secondarily to a sick environment, and they partially resolve an ongoing debate about what microglia do in Rett syndrome—in recent years some studies have shown that microglia can arrest the pathology of the disease, whereas others have indicated that they cannot. “Microglia are doing something, but in our research, it seems to be a secondary effect,” Schafer says. “What’s going on is still a huge mystery.”

Activated microglial cells (red) among a GFP-expressing neurone and astrocyte (green) in a rat hippocampal tissue slice. http://keck.bioimaging.wisc.edu/lecture-series-2006-2007.html

Return of the Pruning Shears

As the brain’s resident immune cells, microglia act as sentinels, sensing and removing disturbances. When the brain is exposed to injury or disease, microglia surround the damaged areas and eat up the remains of dying cells. In Alzheimer’s disease, for example, microglia are often found near sites of beta-amyloid deposits, the toxic clumps of misfolded proteins that appear in the brains of affected people. On the one hand, microglia may delay the progression of the disease by clearing cellular debris; on the other, they may be contributing to the disease itself.

Early synapse loss is a hallmark of many neurodegenerative disorders. Growing evidence points to the possibility that microglial pruning pathways seen in early development may be reactivated later in life, leading to disease. Unpublished data from Stevens’s lab presented at the conference suggest that microglia are involved in the early stages of Alzheimer’s and that blocking microglia’s effects could reduce the synapse loss seen in Huntington’s disease.

In this newly burgeoning field, there are still more questions than answers. Next year’s conference is likely to bring us closer to understanding what these dynamic cells are doing in the brain.

Once the underdogs, microglia may be the key to future therapeutics for a wide variety of psychiatric and neurodegenerative disorders.

Source: Rise of the Microglia By Diana Kwon | October 23, 2015


Microglia

Microglia represent the brain’s endogenous defence and immune system, which is responsible for protecting the CNS against various types of pathogenic factors. Microglial cells derive from progenitors that have migrated in from the periphery and are of mesodermal/mesenchymal origin. During postnatal development they migrate into the brain, in rodents commonly until postnatal day 10. After invading the CNS, microglial precursors disseminate relatively homogeneously throughout the neural tissue and acquire a specific phenotype, which clearly distinguishes them from their precursors, the blood-derived monocytes.

The ‘resting’ microglia are the fastest-moving cells in the brain.

Under physiological conditions, microglia in the CNS exist in the ramified, or what was generally termed the ‘resting’, state. The resting microglial cell is characterised by a small cell body and highly elaborated thin processes, which send multiple branches and extend in all directions. Similar to astrocytes, every microglial cell has its own territory, about 15–30 µm wide; there is very little overlap between neighbouring territories. The processes of a resting microglial cell constantly move through its territory; this is a relatively rapid movement, with a speed of about 1.5 µm/min, and thus microglial processes represent the fastest-moving structures in the brain. At the same time, microglial processes constantly send out and retract small protrusions, which can grow and shrink by 2–3 µm/min. The microglia seem to scan randomly through their domains. Recent studies, however, have demonstrated that these processes rest for periods of minutes at sites of synaptic contacts. Considering the velocity of this movement, the brain parenchyma can be completely scanned by microglial processes every several hours.
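The closing claim, that processes moving at about 1.5 µm/min can re-scan the parenchyma every several hours, is easy to sanity-check. The territory width and process speed below are the figures quoted in the paragraph above; the pass count and the number of processes per cell are our own illustrative assumptions, not values from the source.

```python
# Figures quoted in the text above
territory_width_um = 30.0       # upper end of the 15-30 µm territory
process_speed_um_per_min = 1.5  # speed of a microglial process tip

# Time for one process tip to traverse the territory once
one_pass_min = territory_width_um / process_speed_um_per_min
print(f"one pass across the territory: {one_pass_min:.0f} min")

# Illustrative assumptions (ours, not the source's): full volumetric
# coverage takes on the order of 50 passes, shared among ~5 processes
# that a ramified cell extends in parallel.
passes_needed = 50
processes_per_cell = 5
full_scan_h = one_pass_min * passes_needed / processes_per_cell / 60
print(f"rough full-scan time: {full_scan_h:.1f} h")
```

With these (admittedly crude) assumptions the estimate lands in the low single-digit hours, consistent with the “every several hours” figure in the text.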

The motility of the processes is not affected by neuronal firing, but it is sensitive to activators (ATP and its analogues) and inhibitors of purinoceptors.

Focal neuronal damage induces a rapid and concerted movement of many microglial processes towards the site of lesion, and within less than an hour the latter can be completely surrounded by these processes. This injury-induced motility is also governed, at least in part, by activation of purinoceptors; it is also sensitive to the inhibition of gap junctions, which are present in astrocytes, but not in microglia; inhibition of gap junctions also affects the physiological motility of astroglial processes. Therefore, it appears that astrocytes signal to the microglia by releasing ATP (and possibly some other molecules) through connexin hemichannels.

All in all, microglial processes act as a very sophisticated and fast scanning system. This system can, by virtue of receptors residing in the microglial cell plasmalemma (plasma membrane), immediately detect injury and initiate the process of active response, which eventually triggers the full blown microglial activation.

Activation of microglia

When a brain insult is detected by microglial cells, they launch a specific program that results in the gradual transformation of resting, ramified microglia into an ameboid form; this process is generally referred to as ‘microglial activation’ and proceeds through several steps.

During the first stage of microglial activation, resting microglia retract their processes, which become fewer and much thicker; increase the size of their cell bodies; change the expression of various enzymes and receptors; and begin to produce immune response molecules. Some microglial cells return to a proliferative mode, and microglial numbers around the lesion site start to increase. Microglial cells become motile and, using amoeboid-like movements, gather around sites of insult. If the damage persists and CNS cells begin to die, microglial cells undergo further transformation and become phagocytes. This is, naturally, a rather sketchy account of the complex and highly coordinated changes that occur in microglial cells; the process of activation is gradual, and most likely many sub-states exist on the way from resting to phagocytic microglia. Furthermore, activated microglial cells may display quite heterogeneous properties in different types of pathologies and in different parts of the brain.

The precise nature of the initial signal that triggers the process of microglial activation is not fully understood; it may be associated either with the withdrawal of some molecules released during normal CNS activity (the ‘off-signal’) or with the appearance of abnormal molecules, or abnormal concentrations of otherwise physiologically present molecules (the ‘on-signal’). Both types of signalling can provide microglia with relevant information about the status of the brain parenchyma within their territorial domain.

The ‘off-signals’ that may indicate deterioration in neural networks are not yet fully characterised. Neurotransmitters are a good example of this type of communication. Microglial cells express a variety of classical neurotransmitter receptors, such as receptors for GABA, glutamate, dopamine and noradrenaline. In most cases, activation of these receptors counteracts the activation of microglial cells with respect to acquiring a pro-inflammatory phenotype. One might speculate that depression of neuronal activity could affect neighbouring microglia, turning them into an ‘alerted’ state. In effect, these ‘off-signals’ allow microglia to sense a disturbance even if the nature of the damaging factor cannot be identified.

The ‘on-signalling’ is conveyed by a wide array of molecules associated either with cell damage or with foreign matter invading the brain. In particular, damaged neurones can release large amounts of ATP, cytokines, neuropeptides and growth factors. Many of these factors can be sensed by microglia and trigger activation. It may well be that different molecules activate different subprogrammes of this routine and therefore regulate the speed and degree of microglial activation. Some of these molecules can carry both ‘off’ and ‘on’ signals: for example, low concentrations of ATP may be indicative of normal ongoing synaptic activity, whereas high concentrations signal cell damage.
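As a toy illustration, ATP's dual role can be sketched as a threshold rule. The concentration bounds below are invented purely for illustration and do not reflect measured purinoceptor sensitivities:

```python
# Toy sketch of ATP as a dual 'off'/'on' signal. The micromolar bounds are
# hypothetical; real microglial purinoceptor subtypes differ in sensitivity.
LOW_UM, HIGH_UM = 1.0, 100.0  # invented thresholds for illustration

def interpret_atp(conc_um: float) -> str:
    """Classify an ambient ATP concentration the way the text describes:
    very low = loss of normal activity ('off'), very high = damage ('on')."""
    if conc_um < LOW_UM:
        return "off-signal: loss of normal synaptic activity"
    if conc_um > HIGH_UM:
        return "on-signal: cell damage"
    return "baseline: normal ongoing synaptic activity"
```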


 

Microglia are also capable of sensing disturbances in brain metabolism: for example, the accumulation of ammonia that accompanies severe metabolic failure (e.g. during hepatic encephalopathy) can activate microglial cells either directly or via intermediates such as nitric oxide (NO) or ATP.

Spidery microglia, in red. From Kettenmann et al., Neuron 2013 http://phenomena.nationalgeographic.com/2013/01/11/best-cells-ever/

FUNCTIONS OF MICROGLIA

Migration and motility

Microglial migration is essential for many pathophysiological processes, including immune defence and wound healing.

Microglial cells exhibit two types of movement activity:

In the ramified (“resting”) form, they actively move their processes without translocation of the cell body, as described above.

In the amoeboid form, microglial cells not only move their processes but can also migrate through the brain tissue as whole cells. Microglial migration occurs in development, when invading monocytes disseminate through the brain.

Another type of migration is triggered by a pathological insult, when ramified microglia undergo activation, transform into the amoeboid form and migrate to the site of injury. There are many candidate molecules that may serve as pathological signals, initiating microglial migration and acting as chemoattractants; these include ATP, cannabinoids, chemokines, lysophosphatidic acid and bradykinin. The actual movement of microglial cells involves redistribution of salt and water, and various ion channels and transporters are important for this process. In particular, K+ channels, Cl− channels, the Na+/H+ exchanger, the Cl−/HCO3− exchanger and the Na+/HCO3− co-transporter contribute to microglial motility and migration.


The influence of gold surface texture on microglia morphology and activation. Microglia seeded on glass, ultra-flat gold (UF-Au), ultra-thin (UT) nanoporous gold (np-Au) and np-Au monolith were adherent to all surfaces, and their viability was not compromised, as assessed by multiple toxicity assays. http://pubs.rsc.org/en/Content/ArticleLanding/2014/BM/C3BM60096C#!divAbstract

Phagocytosis

Microglial cells are the professional innate phagocytes of the CNS tissue.

This function is important in the normal brain and during brain development, as well as in pathology and regeneration.

In CNS development, microglial phagocytosis is instrumental in removing apoptotic cells and may be involved in synapse removal during development, and potentially in pruning synapses in the postnatal brain. Microglial phagocytosis is also intimately involved in many neurological diseases. In response to a lesion, microglial cells accumulate at the damaged site and remove cellular debris or even parts of damaged cells.

Through phagocytosis, microglial cells can also accumulate various pathological factors, such as β-amyloid in Alzheimer’s disease or myelin fragments in demyelinating diseases. Multiple factors, receptors and signalling cascades regulate this phagocytic activity. In particular, microglial phagocytosis is controlled by purinoceptors: the metabotropic P2Y6 receptors stimulate, whereas the ionotropic P2X7 receptors inhibit, phagocytic activity. Microglial phagocytosis is also controlled by glial-derived neurotrophic factor, ciliary neurotrophic factor, Toll-like receptors, prostanoid receptors and other signals.


Age-primed microglia hypothesis of Parkinson’s disease. Microglia function differently in the young (left) and aged (right) brain. Left: when facing pathogenic stimuli (large black dots), the healthy microglia in the young brain respond by releasing neurotrophic factors (small yellow dots) that support the endangered dopaminergic neurones and limit neuronal damage. Right: in the aged brain, primed microglia instead release oxidative stress mediators and inflammatory factors (small black dots), which damage the vulnerable dopaminergic neurones and eventually lead to neurodegeneration. (From Luo et al., 2010, with permission.) Image source

Antigen presentation

Microglial cells are the dominant antigen-presenting cells in the central nervous system. Under resting conditions, the expression of the molecular complex for presenting antigen, the major histocompatibility complex II (MHCII), and of co-stimulatory molecules such as CD80, CD86 and CD40 is below detection. Upon injury, these molecules are highly upregulated, and their expression is essential for interacting with T lymphocytes. This upregulation has been described in a number of pathologies and is well studied in multiple sclerosis: microglial cells phagocytose myelin, degrade it and present peptides of the myelin proteins as antigens. By releasing chemokines such as CCL2, microglial cells are important for recruiting leukocytes into the CNS. Microglia interact with infiltrating T lymphocytes and thus mediate the immune response in the brain. They have the capacity to stimulate proliferation of both TH1 and TH2 CD4-positive T cells.

Source: http://www.networkglia.eu/en/microglia

Microglial and mo-MΦ functions – cascade of events. (a) Resident microglia originate from yolk sac macrophages that populate the CNS parenchyma during early development and are self-renewed locally, independent of bone marrow-derived monocytes, by the proliferation of primitive progenitors. (b) In the steady state, microglia constantly scan their environment through their highly motile processes. These cells facilitate the maintenance of synapses (c) and neurogenesis (d), as well as secrete growth factors essential for normal CNS performance (e). Upon recognition of a danger signal, microglia retract their branches, become round and amoeboid, and convert into an activated mode (f). A short or moderate signal directs microglia toward a neuroprotective phenotype; these cells clear debris by phagocytosis (g), secrete growth factors associated with remyelination (h) and support regeneration (i). Intensive acute or chronic activation renders microglia neurotoxic; under such conditions, microglia fail to acquire a neuroprotective phenotype. Instead, these cells produce reactive oxygen species (ROS), nitric oxide (NO), proteases, and pro-inflammatory cytokines such as IL-1, IL-6, and TNF-α, all of which endanger neuronal activity (j). Microglial malfunction results in the recruitment of mo-MΦ to the damage site (k). mo-MΦ secrete anti-inflammatory cytokines such as IL-10 and TGF-β, express factors associated with immune resolution such as mannose receptor and arginase 1 (ARG1), and promote neuroprotection and cell renewal (l), all of which are functions that cannot be provided, under these conditions, by the resident microglia. http://journal.frontiersin.org/article/10.3389/fncel.2013.00034/full

Signalling pathways implicated in the phagocytosis of neurons and neuronal structures.

Signalling pathways implicated in the phagocytosis of neurones and neuronal structures. Microglial phagocytosis of neurones is regulated by the neuronal presentation and microglial recognition of ‘eat-me’ (left) and ‘don’t eat-me’ (right) signals. However, note that the utilisation of different signals, opsonins and receptors is dependent on the specific (patho)physiological context. Neuronal eat-me signals are recognised by microglial phagocytic receptors either directly or following their binding by opsonins, which are in turn recognised by microglial receptors. Phosphatidylserine that is exposed on neurons can be bound by the opsonins milk fat globule-EGF factor 8 (MFG-E8), growth arrest-specific protein 6 (GAS6) or protein S, which can induce phagocytosis by binding to and activating a vitronectin receptor (VNR) (in the case of MFG-E8) or MER receptor tyrosine kinase (MERTK) (in the case of GAS6 or protein S). Note that stimulation of MERTK can also occur downstream of VNR activation (dashed arrow). Alternatively, neuron-exposed phosphatidylserine may directly bind to brain-specific angiogenesis inhibitor 1 (BAI1) on microglia, and neuron-exposed calreticulin or neuron-bound C1q can induce phagocytosis by activating the microglial low-density lipoprotein receptor-related protein (LRP). C1q can also bind to glycoproteins from which sialic acid residues have been removed by the enzyme neuraminidase. C1q deposition on desialylated glycoproteins, in turn, leads to the conversion of C3 to the opsonin C3b, which activates neuronal phagocytosis via the microglial complement receptor 3 (CR3) and its signalling partner DNAX-activation protein 12 (DAP12). By contrast, neuronal don’t eat-me signals inhibit phagocytosis and can in some instances also suppress inflammation. 
Neuronal CD47 and sialylated glycoproteins inhibit phagocytosis of neurones by binding to the microglial receptors signal regulatory protein 1α (SIRP1α) and sialic acid binding immunoglobulin-like lectins (SIGLECs), respectively.

Microglial phagocytosis of live cells and neuronal structures.

Microglial phagocytosis of live cells and neuronal structures. The figure illustrates situations in which phagocytic recognition leads to the removal of neuronal structures (synapses and neurites) or live cells (glioma cells, neutrophils, neuronal precursors and stressed-but-viable neurones) in the CNS. The shown pathways have been implicated in mediating phagocytic recognition of each target, but other signals may contribute to these processes. a | During development as well as in the adult animal, weak synapses are removed through a process that is dependent on the complement components C1q and C3 and the microglial complement receptor 3 (CR3). b | During development, microglia phagocytose live neuronal progenitors, and this involves the local release of reactive oxygen and nitrogen species (RONS) by microglia. RONS may induce caspase 3 activation in the targeted neuronal progenitor, which may be phagocytosed via the CR3 subunit CD11b and the adaptor protein DNAX-activation protein 12 (DAP12). c | During brain pathology, sub-toxic neuronal insults (such as inflammation, oxidative stress, excessive levels of glutamate or energy depletion) can induce the reversible exposure of the neuronal eat-me signal phosphatidylserine. Phosphatidylserine is recognised by the opsonin milk fat globule-EGF factor 8 (MFG-E8), which induces phagocytosis through activation of the microglial vitronectin receptor (VNR). In addition, MER receptor tyrosine kinase (MERTK) also contributes to phagocytic signalling under these circumstances, either through its activation downstream of VNR or through the recognition of unidentified opsonins or eat-me signals. d,e | In addition to the removal of neurones, neuronal precursors and neuronal structures, microglia can phagocytose live neutrophils through activation of the microglial VNR and lectins, or live glioma cells through microglial sialic acid binding immunoglobulin-like lectin-H (SIGLEC-H) and DAP12.

Mechanisms mediating microglial phagocytosis of stressed-but-viable neurons during inflammation.

Mechanisms mediating microglial phagocytosis of stressed-but-viable neurones during inflammation. Activation of microglial Toll-like receptor 2 (TLR2) and TLR4 by damage- or pathogen-associated molecules or by amyloid-β results in the release of reactive oxygen and nitrogen species (RONS) derived from inducible nitric oxide synthase (iNOS) and NADPH oxidase (PHOX). RONS can cause nearby neurones to expose phosphatidylserine in a reversible manner on their surface through the stimulation of a phosphatidylserine scramblase (probably a TMEM16 protein) and/or the inhibition of a phosphatidylserine translocase (probably the type 4 P-type ATPases (P4-ATPases) ATP8A1 or ATP8A2). Neuronal stress induced by activated microglia or by other means may also cause exposure of phosphatidylserine on stressed-but-viable neurones via calcium- or RONS-mediated activation of a scramblase or inhibition of a translocase, via ATP depletion-induced inhibition of a translocase or, in some circumstances, via caspase-mediated activation of a distinct scramblase (probably XK-related protein 8 (XKR8)). Exposed phosphatidylserine is bound by milk fat globule-EGF factor 8 (MFG-E8), which is released by activated microglia and astrocytes and which promotes phagocytosis of the phosphatidylserine-tagged neurone through the vitronectin receptor (VNR). The VNR may drive phagocytosis by triggering actin polymerization in synergy with MER receptor tyrosine kinase (MERTK). MERTK is upregulated by microglia activation and may also bind to neurones via opsonins that bind exposed phosphatidylserine or other eat-me signals.


— Summary of evidence that molecular pathways characterised in pathology are also utilised by microglia in the normal and developing brain to influence synaptic development and connectivity, and therefore should become targets of future research —

Microglia: New Roles for the Synaptic Stripper

Neuron, Volume 77, Issue 1, 9 January 2013, Pages 10–18

Helmut Kettenmann(1), Frank Kirchhoff(2), Alexei Verkhratsky(3, 4)

  • 1 Max-Delbrück-Center for Molecular Medicine, 13125 Berlin, Germany
  • 2 Department of Molecular Physiology, University of Saarland, 66424 Homburg, Germany
  • 3 Faculty of Life Sciences, The University of Manchester, Manchester M13 9PL, UK
  • 4 Achucarro Center for Neuroscience, Ikerbasque, Basque Foundation for Science, 48011 Bilbao, Spain

For more details

Physiology of Microglia

Helmut Kettenmann, Uwe-Karsten Hanisch, Mami Noda, Alexei Verkhratsky
  1. Matthew R. Ritter, Eyal Banin, Stacey K. Moreno, Edith Aguilar, Michael I. Dorrell, and Martin Friedlander. Myeloid progenitors differentiate into microglia and promote vascular repair in a model of ischemic retinopathy. J Clin Invest. 2006 Dec 1; 116(12): 3266–3276. Vision loss associated with ischemic diseases such as retinopathy of prematurity and diabetic retinopathy is often due to retinal neovascularization. While significant progress has been made in the development of compounds useful for the treatment of abnormal vascular permeability and proliferation, such therapies do not address the underlying hypoxia that stimulates the observed vascular growth. Using a model of oxygen-induced retinopathy, we demonstrate that a population of adult BM–derived myeloid progenitor cells migrated to avascular regions of the retina, differentiated into microglia, and facilitated normalisation of the vasculature. Myeloid-specific hypoxia-inducible factor 1α (HIF-1α) expression was required for this function, and we also demonstrate that endogenous microglia participated in retinal vascularization. These findings suggest what we believe to be a novel therapeutic approach for the treatment of ischemic retinopathies that promotes vascular repair rather than destruction.
  2. Judy Choi, Qingdong Zheng, Howard E. Katz, Tomás R. Guilarte. Silica-Based Nanoparticle Uptake and Cellular Response by Primary Microglia. Environ Health Perspect 118:589-595 (2010). http://dx.doi.org/10.1289/ehp.0901534 [online 21 December 2009] Silica nanoparticles (SiNPs) are being formulated for cellular imaging and for nonviral gene delivery in the central nervous system (CNS), but it is unclear what potential effects SiNPs can elicit once they enter the CNS. As the resident macrophages of the CNS, microglia are the cells most likely to respond to SiNP entry into the brain. Upon activation, they are capable of undergoing morphological and functional changes. This is the first study demonstrating the in vitro effects of SiNPs in primary microglia. Our findings suggest that very low levels of SiNPs are capable of altering microglial function. Increased reactive oxygen species (ROS) and reactive nitrogen species (RNS) production, changes in proinflammatory genes, and cytokine release may not only adversely affect microglial function but also affect surrounding neurones.
  3. Yang Zhan, Rosa C Paolicelli, Francesco Sforazzini et al. Deficient neuron-microglia signalling results in impaired functional brain connectivity and social behaviour. Nature Neuroscience, March 2014, vol. 17, no. 3, pp. 400–406. In summary, our findings reveal a role for microglia in promoting the maturation of circuit connectivity during development, with synapse elimination going hand in hand with enhanced synaptic multiplicity. Our data support the hypothesis that microglia-mediated synaptic pruning during development has a critical role in sculpting neural circuit function, which may contribute to the physiological and behavioural features of a range of neurodevelopmental disorders. Our data also open up the possibility that genetic and environmental risk factors for such disorders may exert their effect by modulating synapse elimination. Further studies are warranted to test the hypothesis that variation in synaptic pruning may underlie individual differences in human brain wiring.
  4. Juan I. Rodriguez, Janet K. Kern. Evidence of microglial activation in autism and its possible role in brain underconnectivity. Neuron Glia Biol. 2011 May; 7(2-4): 205–213. Evidence indicates that children with autism spectrum disorder (ASD) suffer from an ongoing neuroinflammatory process in different regions of the brain involving microglial activation. When microglia remain activated for an extended period, the production of mediators is sustained longer than usual and this increase in mediators contributes to loss of synaptic connections and neuronal cell death. Microglial activation can then result in a loss of connections or underconnectivity. Underconnectivity is reported in many studies in autism. One way to control neuroinflammation is to reduce or inhibit microglial activation. It is plausible that by reducing brain inflammation and microglial activation, the neurodestructive effects of chronic inflammation could be reduced and allow for improved developmental outcomes. Future studies that examine treatments that may reduce microglial activation and neuroinflammation, and ultimately help to mitigate symptoms in ASD, are warranted.


    This diagram shows the relationships and interplay between microglial activation and the neuropathology, medical issues and symptoms in ASD. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3523548/figure/fig01

  5. Anat London, Merav Cohen and Michal Schwartz. Microglia and monocyte-derived macrophages: functionally distinct populations that act in concert in CNS plasticity and repair.  Front. Cell. Neurosci., 08 April 2013 | http://dx.doi.org/10.3389/fncel.2013.00034
  6. Carmelina Gemma and Adam D. Bachstetter. The role of microglia in adult hippocampal neurogenesis. Front. Cell. Neurosci., 22 November 2013 | http://dx.doi.org/10.3389/fncel.2013.00229
  7. Sabine Hellwig, Annette Heinrich and Knut Biber. The brain’s best friend: microglial neurotoxicity revisited. Front. Cell. Neurosci., 16 May 2013 | http://dx.doi.org/10.3389/fncel.2013.00071
  8. Peter Thériault, Ayman ElAli and Serge Rivest. The dynamics of monocytes and microglia in Alzheimer’s disease. Alzheimer’s Research & Therapy 2015, 7:41 doi:10.1186/s13195-015-0125-2      The present review summarises current knowledge on the role of monocytes and microglia in AD and how these cells can be mobilised to prevent and treat the disease.
  9. Guy C. Brown & Jonas J. Neher. Microglial phagocytosis of live neurones. Nature Reviews Neuroscience 15, 209–216 (2014) doi:10.1038/nrn3710
    Published online 20 March 2014.
  10. Neuropathology. APPLIED NEUROCYTOLOGY AND BASIC REACTIONS. Dimitri P. Agamanolis, M.D. 
  11. Rosa C. Paolicelli, Giulia Bolasco, Francesca Pagani et al. Synaptic Pruning by Microglia Is Necessary for Normal Brain Development. Published Online July 21, 2011. Science 9 September 2011: Vol. 333 no. 6048 pp. 1456-1458
    DOI: 10.1126/science.1202529
  12. Axel Nimmerjahn, Frank Kirchhoff, Fritjof Helmchen. Resting Microglial Cells Are Highly Dynamic Surveillants of Brain Parenchyma in Vivo. Published Online April 14, 2005. Science 27 May 2005: Vol. 308 no. 5726 pp. 1314-1318. DOI: 10.1126/science.1110647
  13. Hiroaki Wake, Andrew J. Moorhouse, Shozo Jinno, Shinichi Kohsaka, and Junichi Nabekura. Resting Microglia Directly Monitor the Functional State of Synapses In Vivo and Determine the Fate of Ischemic Terminals. The Journal of Neuroscience, 1 April 2009, 29(13): 3974-3980; doi: 10.1523/JNEUROSCI.4363-08.2009. In summary, we report direct, activity-dependent connections between microglia and synapses and have quantified the kinetics of this interaction. These interactions are very consistent in the control brain but markedly prolonged in the ischemic brain. We propose that microglia detects the functional state of the synapses, responding with prolonged contact under pathological conditions. Such prolonged contact is followed on some occasions by synapse elimination, suggesting that microglia are perhaps either attempting to restore synapse function or initiating their subsequent removal (as illustrated schematically in supplemental Fig. 3, available at www.jneurosci.org as supplemental material). Such microglial diagnosis of synapse function is likely to be important in any subsequent remodelling of neuronal circuits after brain injury and provides a novel target for the development of new therapies to improve the recovery of neuronal function after brain damage.
  14. Sarah E. Schipul, Timothy A. Keller, and Marcel Adam Just. Inter-Regional Brain Communication and Its Disturbance in Autism. Front Syst Neurosci. 2011; 5: 10.
    Published online 2011 Feb 22. doi: 10.3389/fnsys.2011.00010     Recent findings of atypical patterns in both functional and anatomical connectivity in autism have established that autism is a not a localised neurological disorder, but one that affects many parts of the brain in many types of thinking tasks. fMRI studies repeatedly find evidence of decreased coordination between frontal and posterior brain regions in autism, as measured by functional connectivity. Furthermore, neuroimaging studies have also shown evidence of an atypical pattern of frontal white matter development in autism. These findings indicate that limitations of brain connectivity give rise to the varied behavioural deficits found in autism. As research continues to explore these biological mechanisms, new intervention methods may be developed to help improve brain connectivity and overcome the behavioural impairments of autism.
  15. Noël C. Derecki, James C. Cronk, Zhenjie Lu, Eric Xu, Stephen B. G. Abbott, Patrice G. Guyenet & Jonathan Kipnis. Wild-type microglia arrest pathology in a mouse model of Rett syndrome. Nature 484, 105–109 (05 April 2012) doi:10.1038/nature10907
  16. Jieqi Wang, Jan-Eike Wegener, Teng-Wei Huang, Smitha Sripathy, Hector De Jesus-Cortes, Pin Xu, Stephanie Tran, Whitney Knobbe, Vid Leko, Jeremiah Britt, Ruth Starwalt, Latisha McDaniel, Chris S. Ward, Diana Parra, Benjamin Newcomb, Uyen Lao, Cynthia Nourigat, David A. Flowers, Sean Cullen, Nikolas L. Jorstad, Yue Yang, Lena Glaskova, Sébastien Vigneau, Julia Kozlitina, Michael J. Yetman et al. Wild-type microglia do not reverse pathology in mouse models of Rett syndrome. Nature 521, E1–E4 (21 May 2015) doi:10.1038/nature14444

Dengue Viral Infection: World Health Organisation (WHO) Guidelines

Dengue Virus Source
A TEM micrograph showing Dengue virus virions (the cluster of dark dots near the center). Source

Brief Information for Public in General

Dengue is a mosquito-borne viral disease that has spread rapidly across all WHO regions in recent years. Dengue virus is transmitted by female mosquitoes mainly of the species Aedes aegypti and, to a lesser extent, A. albopictus. The disease is widespread throughout the tropics, with local variations in risk influenced by rainfall, temperature and unplanned rapid urbanization.

Severe dengue (also known as dengue haemorrhagic fever) was first recognized in the 1950s during dengue epidemics in the Philippines and Thailand. Today, severe dengue affects most Asian and Latin American countries and has become a leading cause of hospitalization and death among children in these regions.

There are 4 distinct, but closely related, serotypes of the virus that cause dengue (DEN-1, DEN-2, DEN-3 and DEN-4). Recovery from infection by one provides lifelong immunity against that particular serotype. However, cross-immunity to the other serotypes after recovery is only partial and temporary. Subsequent infections by other serotypes increase the risk of developing severe dengue.

Aedes aegypti; adult female mosquito taking a blood meal on human skin.


Transmission

The Aedes aegypti mosquito is the primary vector of dengue. The virus is transmitted to humans through the bites of infected female mosquitoes. After virus incubation for 4–10 days, an infected mosquito is capable of transmitting the virus for the rest of its life.

Infected humans are the main carriers and multipliers of the virus, serving as a source of the virus for uninfected mosquitoes. Patients who are already infected with the dengue virus can transmit the infection (for 4–5 days; maximum 12) via Aedes mosquitoes after their first symptoms appear.

The Aedes aegypti mosquito lives in urban habitats and breeds mostly in man-made containers. Unlike other mosquitoes, Ae. aegypti is a day-time feeder; its peak biting periods are early in the morning and in the evening before dusk. Female Ae. aegypti bite multiple people during each feeding period.

Aedes albopictus, a secondary dengue vector in Asia, has spread to North America and Europe largely due to the international trade in used tyres (a breeding habitat) and other goods (e.g. lucky bamboo). Ae. albopictus is highly adaptive and can therefore survive in the cooler temperate regions of Europe. Its spread is due to its tolerance of temperatures below freezing, its ability to hibernate and its ability to shelter in micro-habitats.

Characteristics

Dengue fever is a severe, flu-like illness that affects infants, young children and adults, but seldom causes death.

Dengue should be suspected when a high fever (40°C/104°F) is accompanied by 2 of the following symptoms: severe headache, pain behind the eyes, muscle and joint pains, nausea, vomiting, swollen glands or rash. Symptoms usually last for 2–7 days, after an incubation period of 4–10 days after the bite from an infected mosquito.
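The suspicion rule quoted above (high fever plus at least two of the listed symptoms) can be written as a minimal sketch. This is an illustration of the stated criterion only, not a clinical tool:

```python
# Sketch of the case-suspicion rule as stated in the text: fever of
# 40°C/104°F accompanied by at least two of the listed symptoms.
ACCOMPANYING_SYMPTOMS = {
    "severe headache", "pain behind the eyes", "muscle and joint pains",
    "nausea", "vomiting", "swollen glands", "rash",
}

def suspect_dengue(fever_c: float, symptoms: set) -> bool:
    """Return True when the quoted suspicion criterion is met."""
    return fever_c >= 40.0 and len(symptoms & ACCOMPANYING_SYMPTOMS) >= 2
```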

Severe dengue is a potentially deadly complication due to plasma leaking, fluid accumulation, respiratory distress, severe bleeding, or organ impairment.

Warning signs occur 3–7 days after the first symptoms in conjunction with a decrease in temperature (below 38°C/100°F) and include: severe abdominal pain, persistent vomiting, rapid breathing, bleeding gums, fatigue, restlessness and blood in vomit. The next 24–48 hours of the critical stage can be lethal; proper medical care is needed to avoid complications and risk of death.

Treatment

There is no specific treatment for dengue fever.

For severe dengue, medical care by physicians and nurses experienced with the effects and progression of the disease can save lives – decreasing mortality rates from more than 20% to less than 1%. Maintenance of the patient’s body fluid volume is critical to severe dengue care.

Immunization

There is no vaccine to protect against dengue. However, major progress has been made in developing a vaccine against dengue/severe dengue. Three tetravalent live-attenuated vaccines are under development in phase II and phase III clinical trials, and three other vaccine candidates (based on subunit, DNA and purified inactivated virus platforms) are at earlier stages of clinical development. WHO provides technical advice and guidance to countries and private partners to support vaccine research and evaluation.

Prevention and control

Havana: A local health worker uses a torch to check for signs of water and mosquito eggs inside tyres in a tyre depot.

 At present, the only method to control or prevent the transmission of dengue virus is to combat vector mosquitoes through:
  • preventing mosquitoes from accessing egg-laying habitats by environmental management and modification;
  • disposing of solid waste properly and removing man-made habitats;
  • covering, emptying and cleaning of domestic water storage containers on a weekly basis;
  • applying appropriate insecticides to water storage outdoor containers;
  • using personal household protection such as window screens, long-sleeved clothes, insecticide-treated materials, coils and vaporizers;
  • improving community participation and mobilization for sustained vector control;
  • applying insecticides as space spraying during outbreaks as one of the emergency vector-control measures;
  • carrying out active monitoring and surveillance of vectors to determine the effectiveness of control interventions.
Reference
  1. Bhatt S, Gething PW, Brady OJ, Messina JP, Farlow AW, Moyes CL, et al. The global distribution and burden of dengue. Nature. 2013;496:504–507.
  2. Brady OJ, Gething PW, Bhatt S, Messina JP, Brownstein JS, Hoen AG et al. Refining the global spatial limits of dengue virus transmission by evidence-based consensus. PLoS Negl Trop Dis. 2012;6:e1760. doi:10.1371/journal.pntd.0001760.

Dengue Viruses

Viruses are tiny agents that can infect a variety of living organisms, including bacteria, plants, and animals. Like other viruses, the dengue virus is a microscopic structure that can only replicate inside a host organism.

Discovery of the Dengue Viruses

 The dengue viruses are members of the genus Flavivirus in the family Flaviviridae. Along with the dengue virus, this genus also includes a number of other viruses transmitted by mosquitoes and ticks that are responsible for human diseases. Flavivirus includes the yellow fever, West Nile, Japanese encephalitis, and tick-borne encephalitis viruses.

In 1943, Ren Kimura and Susumu Hotta first isolated the dengue virus. These two scientists were studying blood samples of patients taken during the 1943 dengue epidemic in Nagasaki, Japan. A year later, Albert B. Sabin and Walter Schlesinger independently isolated the dengue virus. Both pairs of scientists had isolated the virus now referred to as dengue virus 1 (DEN-1).

Is DEN-1 the only type of dengue virus?

The Dengue Serotypes

 Dengue infections are caused by four closely related viruses named DEN-1, DEN-2, DEN-3, and DEN-4. These four viruses are called serotypes because each has different interactions with the antibodies in human blood serum. The four dengue viruses are similar — they share approximately 65% of their genomes — but even within a single serotype, there is some genetic variation. Despite these variations, infection with each of the dengue serotypes results in the same disease and range of clinical symptoms.

Are these four viruses all found in the same regions of the world?

In the 1970s, both DEN-1 and DEN-2 were found in Central America and Africa, and all four serotypes were present in Southeast Asia. By 2004, however, the geographical distribution of the four serotypes had spread widely. Now all four dengue serotypes circulate together in tropical and subtropical regions around the world. The four dengue serotypes share the same geographic and ecological niche.

Where did the dengue viruses first come from?

Scientists hypothesize that the dengue viruses evolved in non-human primates and jumped from these primates to humans in Africa or Southeast Asia between 500 and 1,000 years ago.

After recovering from an infection with one dengue serotype, a person has immunity against that particular serotype.

Does infection with one serotype protect against future dengue infections with the other serotypes?

Individuals are protected from infections with the remaining three serotypes for two to three months after the first dengue infection.

Unfortunately, it is not long-term protection.

After that short period, a person can be infected with any of the remaining three dengue serotypes. Researchers have observed that these subsequent infections put individuals at greater risk for severe dengue illness than people who have never been infected.

Dengue Virus Genome and Structure

 The dengue virus genome is a single strand of RNA. It is referred to as positive-sense RNA because it can be directly translated into proteins. The viral genome encodes ten genes. The genome is translated as a single, long polypeptide and then cut into ten proteins.
A diagram shows the dengue virus RNA genome with its structural and non-structural regions labeled. The RNA is depicted as a horizontal cylinder separated into several colored sections of varying sizes. A thin black coiled line representing untranslated RNA extends from the cylinder's lefthand terminus and is labeled the 5 prime UTR. From left to right, the genes encoded by the dengue virus genome are: the capsid, labeled C and colored light brown; the membrane, labeled M and colored orange; the envelope, labeled E and colored blue; and several non-structural genes, including NS1 (green), NS2A (red), NS2B (dark brown), NS3 (yellow), NS4A (dark orange), NS4B (teal), and NS5 (purple). NS5 is the longest gene; NS2A and NS2B have the shortest lengths. A region of untranslated RNA at the cylinder's righthand terminus is labeled the 3 prime UTR.
Dengue virus genome The dengue virus genome encodes three structural (capsid [C], membrane [M], and envelope [E]) and seven nonstructural (NS1, NS2A, NS2B, NS3, NS4A, NS4B, and NS5) proteins. © 2010 Nature Publishing Group Guzman, M. G. et al. Dengue: A continuing global threat. Nature Reviews Microbiology 8, S7–S16 (2010). doi:10.1038/nrmicro2460 All rights reserved.

What are the roles of these ten proteins?

Three are structural proteins: the capsid (C), envelope (E), and membrane (M) proteins. Seven are nonstructural proteins: NS1, NS2A, NS2B, NS3, NS4A, NS4B, and NS5. These nonstructural proteins play roles in viral replication and assembly.
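
The grouping of the ten proteins described above can be captured in a small lookup table. This is purely an illustrative sketch of the structural/nonstructural split given in the text, not an established bioinformatics data structure:

```python
# Illustrative only: the ten dengue proteins named in the text,
# grouped into structural and nonstructural roles.
DENGUE_PROTEINS = {
    "C": "structural (capsid)",
    "M": "structural (membrane)",
    "E": "structural (envelope)",
    "NS1": "nonstructural",
    "NS2A": "nonstructural",
    "NS2B": "nonstructural",
    "NS3": "nonstructural",
    "NS4A": "nonstructural",
    "NS4B": "nonstructural",
    "NS5": "nonstructural",
}

structural = [name for name, role in DENGUE_PROTEINS.items()
              if role.startswith("structural")]
nonstructural = [name for name, role in DENGUE_PROTEINS.items()
                 if role == "nonstructural"]
print(len(structural), len(nonstructural))  # 3 7
```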

The structure of the dengue virus is roughly spherical, with a diameter of approximately 50 nm (1 nm is one millionth of 1 mm) (Figure). The core of the virus is the nucleocapsid, a structure that is made of the viral genome along with C proteins. The nucleocapsid is surrounded by a membrane called the viral envelope, a lipid bilayer that is taken from the host. Embedded in the viral envelope are 180 copies of the E and M proteins that span through the lipid bilayer. These proteins form a protective outer layer that controls the entry of the virus into human cells.

A schematic of the dengue virus shows its primary structural features. The virus is depicted as an orange hexagon encapsulated inside a light brown circle. The hexagon is the nucleocapsid and the brown circle is the viral envelope. Thin red material coiled up inside the nucleocapsid represents the viral genome. Seven red lines radiate outward from the viral envelope in a symmetrical orientation. Each line has a green sphere at the end. These protrusions are E and M proteins.
Dengue virus structure The dengue virus has a roughly spherical shape. Inside the virus is the nucleocapsid, which is made of the viral genome and C proteins. The nucleocapsid is surrounded by a membrane called the viral envelope, a lipid bilayer that is taken from the host. Embedded in the viral envelope are E and M proteins that span through the lipid bilayer. These proteins form a protective outer layer that controls the entry of the virus into human cells. © 2011 Nature Education All rights reserved.

Dengue Virus Replication and Infectious Cycle

 How does the virus behave once it enters the human body?
The dengue viral replication process begins when the virus attaches to a human skin cell (Figure). After this attachment, the skin cell’s membrane folds around the virus and forms a pouch that seals around the virus particle. This pouch is called an endosome. A cell normally uses endosomes to take in large molecules and particles from outside the cell for nourishment. By hijacking this normal cell process, the dengue virus is able to enter a host cell.

Dengue virus replication The dengue virus attaches to the surface of a host cell and enters the cell by a process called endocytosis. Once deep inside the cell, the virus fuses with the endosomal membrane and is released into the cytoplasm. The virus particle comes apart, releasing the viral genome. The viral RNA (vRNA) is translated into a single polypeptide that is cut into ten proteins, and the viral genome is replicated. Virus assembly occurs on the surface of the endoplasmic reticulum (ER) when the structural proteins and newly synthesized RNA bud out from the ER. The immature viral particles are transported through the trans-Golgi network (TGN), where they mature and convert to their infectious form. The mature viruses are then released from the cell and can go on to infect other cells. © 2005 Nature Publishing Group Mukhopadhyay, S., Kuhn, R. J., & Rossmann M. G. A structural perspective of the flavivirus life cycle. Nature Reviews Microbiology 3, 13–22 (2005). doi:10.1038/nrmicro1067 All rights reserved.

Once the virus has entered a host cell, the virus penetrates deeper into the cell while still inside the endosome.

How does the virus exit the endosome, and why?

Researchers have learned that two conditions are needed for the dengue virus to exit the endosome:

  1. The endosome must be deep inside the cell where the environment is acidic.
  2. The endosomal membrane must gain a negative charge.

These two conditions allow the virus envelope to fuse with the endosomal membrane, and that process releases the dengue nucleocapsid into the cytoplasm of the cell.

Once it is released into the cell cytoplasm, how does the virus replicate itself?

In the cytoplasm, the nucleocapsid opens to uncoat the viral genome. This process releases the viral RNA into the cytoplasm. The viral RNA then hijacks the host cell’s machinery to replicate itself. The virus uses ribosomes on the host’s rough endoplasmic reticulum (ER) to translate the viral RNA and produce the viral polypeptide. This polypeptide is then cut to form the ten dengue proteins.

The newly synthesized viral RNA is enclosed in the C proteins, forming a nucleocapsid. The nucleocapsid enters the rough ER, where it is enveloped in the ER membrane and surrounded by the M and E proteins. This step adds the viral envelope and protective outer layer. The immature viruses travel through the Golgi complex, where they mature and convert into their infectious form. The mature dengue viruses are then released from the cell and can go on to infect other cells.

Summary

 The dengue virus is a tiny structure that can only replicate inside a host organism. The four closely related dengue viruses — DEN-1, DEN-2, DEN-3, and DEN-4 — are found in the same regions of the world. The dengue virus is a roughly spherical structure composed of the viral genome and capsid proteins surrounded by an envelope and a shell of proteins. After infecting a host cell, the dengue virus hijacks the host cell’s machinery to replicate the viral RNA genome and viral proteins. After maturing, the newly synthesized dengue viruses are released and go on to infect other host cells.
Source:  www.nature.com

Dengue case classification

Dengue has a wide spectrum of clinical presentations, often with unpredictable clinical evolution and outcome. While most patients recover following a self-limiting non-severe clinical course, a small proportion progress to severe disease, mostly characterized by plasma leakage with or without haemorrhage.

Intravenous rehydration is the therapy of choice; this intervention can reduce the case fatality rate of severe dengue to less than 1%. The group progressing from non-severe to severe disease is difficult to define, but this is an important concern since appropriate treatment may prevent these patients from developing more severe clinical conditions.

Triage, appropriate treatment, and the decision as to where this treatment should be given (in a health care facility or at home) are influenced by the case classification for dengue. This is even more the case during the frequent dengue outbreaks worldwide, where health services need to be adapted to cope with the sudden surge in demand.

Symptomatic dengue virus infections were grouped into three categories: undifferentiated fever, dengue fever (DF) and dengue haemorrhagic fever (DHF).

DHF was further classified into four severity grades, with grades III and IV being defined as dengue shock syndrome (DSS).

There have been many reports of difficulties in the use of this classification, which were summarized in a systematic literature review. Difficulties in applying the criteria for DHF in the clinical situation, together with the increase in clinically severe dengue cases which did not fulfil the strict criteria of DHF, led to the request for the classification to be reconsidered. Currently, the classification into DF/DHF/DSS continues to be widely used.

A WHO/TDR-supported prospective clinical multi-centre study across dengue-endemic regions was set up to collect evidence about criteria for classifying dengue into levels of severity. The study findings confirmed that, by using a set of clinical and/or laboratory parameters, one sees a clear-cut difference between patients with severe dengue and those with non-severe dengue.

However, for practical reasons it was desirable to split the large group of patients with non-severe dengue into two subgroups — patients with warning signs and those without them. Criteria for diagnosing dengue (with or without warning signs) and severe dengue are presented in Figure. It must be kept in mind that even dengue patients without warning signs may develop severe dengue.

Expert consensus groups in Latin America (Havana, Cuba, 2007), South-East Asia (Kuala Lumpur, Malaysia, 2007), and at WHO headquarters in Geneva, Switzerland in 2008 agreed that:

“dengue is one disease entity with different clinical presentations and often with unpredictable clinical evolution and outcome”;

the classification into levels of severity has a high potential for being of practical use in the clinicians’ decision as to where and how intensively the patient should be observed and treated (i.e. triage, which is particularly useful in outbreaks), in more consistent reporting in the national and international surveillance system, and as an end-point measure in dengue vaccine and drug trials.

Suggested Dengue Case Classification And Levels of Severity

This model for classifying dengue has been suggested by an expert group (Geneva, Switzerland, 2008) and is currently being tested in 18 countries by comparing its performance in practical settings to the existing WHO case classification.

TRANSMISSION

The host

After an incubation period of 4–10 days, infection by any of the four virus serotypes can produce a wide spectrum of illness, although most infections are asymptomatic or sub-clinical. Primary infection is thought to induce lifelong protective immunity to the infecting serotype [Halstead SB. Etiologies of the experimental dengues of Siler and Simmons. American Journal of Tropical Medicine and Hygiene, 1974, 23:974–982.]

Individuals recovering from an infection are protected from clinical illness caused by a different serotype for 2–3 months after the primary infection, but there is no long-term cross-protective immunity.

Individual risk factors determine the severity of disease and include secondary infection, age, ethnicity and possibly chronic diseases (bronchial asthma, sickle cell anaemia and diabetes mellitus). Young children in particular may be less able than adults to compensate for capillary leakage and are consequently at greater risk of dengue shock.

Sero-epidemiological studies in Cuba and Thailand consistently support the role of secondary heterotypic infection as a risk factor for severe dengue, although there are a few reports of severe cases associated with primary infection.

[Halstead SB, Nimmannitya S, Cohen SN. Observations related to pathogenesis of dengue hemorrhagic fever. IV. Relation of disease severity to antibody response and virus recovered. Yale Journal of Biology and Medicine, 1970, 42:311–328.]

[Sangkawibha N et al. Risk factors in dengue shock syndrome: a prospective epidemiologic study in Rayong, Thailand. I. The 1980 outbreak. American Journal of Epidemiology, 1984;120:653–669.]

[Guzman MG et al. Epidemiologic studies on dengue in Santiago de Cuba, 1997. American Journal of Epidemiology, 2000, 152(9):793–799.]

[Halstead SB. Pathophysiology and pathogenesis of dengue haemorrhagic fever. In: Thongchareon P, ed. Monograph on dengue/dengue haemorrhagic fever. New Delhi, World Health Organization, Regional Office for South-East Asia, 1993 (pp 80–103).]

The time interval between infections and the particular viral sequence of infections may also be of importance.

For instance, a higher case fatality rate was observed in Cuba when DEN-2 infection followed a DEN-1 infection after an interval of 20 years compared to an interval of four years.

Severe dengue is also regularly observed during primary infection of infants born to dengue-immune mothers.

Antibody-dependent enhancement (ADE) of infection has been hypothesized as a mechanism to explain severe dengue in the course of a secondary infection and in infants with primary infections.

[Halstead SB. Antibody, macrophages, dengue virus infection, shock, and hemorrhage: a pathogenetic cascade. Reviews of Infectious Diseases, 1989, 11(Suppl4):S830–S839][Halstead SB, Heinz FX. Dengue virus: molecular basis of cell entry and pathogenesis, 25-27 June 2003, Vienna, Austria. Vaccine, 2005, 23(7):849–856]

In this model, non-neutralizing, cross-reactive antibodies raised during a primary infection, or acquired passively at birth, bind to epitopes on the surface of a heterologous infecting virus and facilitate virus entry into Fc-receptor-bearing cells. The increased number of infected cells is predicted to result in a higher viral burden and induction of a robust host immune response that includes inflammatory cytokines and mediators, some of which may contribute to capillary leakage. During a secondary infection, cross-reactive memory T cells are also rapidly activated, proliferate, express cytokines and die by apoptosis in a manner that generally correlates with overall disease severity. Host genetic determinants might influence the clinical outcome of infection [Kouri GP, Guzman MG. Dengue haemorrhagic fever/dengue shock syndrome: lessons from the Cuban epidemic, 1981. Bulletin of the World Health Organization, 1989, 67(4):375–380.][Sierra B, Kouri G, Guzman MG. Race: a risk factor for dengue hemorrhagic fever. Archives of Virology, 2007, 152(3):533–542.], though most studies have been unable to adequately address this issue. Studies in the American region show the rates of severe dengue to be lower in individuals of African ancestry than those in other ethnic groups.

The dengue virus enters via the skin while an infected mosquito is taking a blood-meal.

During the acute phase of illness the virus is present in the blood and its clearance from this compartment generally coincides with defervescence. Humoral and cellular immune responses are considered to contribute to virus clearance via the generation of neutralizing antibodies and the activation of CD4+ and CD8+ T lymphocytes. In addition, innate host defence may limit infection by the virus. After infection, serotype-specific and cross-reactive antibodies and CD4+ and CD8+ T cells remain measurable for years.

Plasma leakage, haemoconcentration and abnormalities in haemostasis characterize severe dengue. The mechanisms leading to severe illness are not well defined, but the immune response, the genetic background of the individual and the virus characteristics may all contribute to severe dengue.

Recent data suggest that endothelial cell activation could mediate plasma leakage.

[Avirutnan P et al. Dengue virus infection of human endothelial cells leads to chemokine production, complement activation, and apoptosis. Journal of Immunology, 1998, 161:6338–6346]

[Cardier JE et al. Proinflammatory factors present in sera from patients with acute dengue infection induce activation and apoptosis of human microvascular endothelial cells: possible role of TNF-alpha in endothelial cell damage in dengue. Cytokine, 2005, 30(6):359–365.]

Plasma leakage is thought to be associated with functional rather than destructive effects on endothelial cells. Activation of infected monocytes and T cells, the complement system and the production of mediators, monokines, cytokines and soluble receptors may also be involved in endothelial cell dysfunction.

Thrombocytopenia may be associated with alterations in megakaryocytopoiesis caused by infection of human haematopoietic cells and impaired progenitor-cell growth, resulting in platelet dysfunction (platelet activation and aggregation), increased platelet destruction, or consumption (peripheral sequestration).

Haemorrhage may be a consequence of thrombocytopenia and the associated platelet dysfunction, or of disseminated intravascular coagulation. In summary, a transient and reversible imbalance of inflammatory mediators, cytokines and chemokines occurs during severe dengue, probably driven by a high early viral burden. It leads to dysfunction of vascular endothelial cells and derangement of the haemocoagulation system, and then to plasma leakage, shock and bleeding.

Transmission of the dengue virus

Humans are the main amplifying host of the virus. Dengue virus circulating in the blood of viraemic humans is ingested by female mosquitoes during feeding. The virus then infects the mosquito mid-gut and subsequently spreads systemically over a period of 8–12 days. After this extrinsic incubation period, the virus can be transmitted to other humans during subsequent probing or feeding. The extrinsic incubation period is influenced in part by environmental conditions, especially ambient temperature. Thereafter the mosquito remains infective for the rest of its life. Ae. aegypti is one of the most efficient vectors for arboviruses because it is highly anthropophilic, frequently bites several times before completing oogenesis, and thrives in close proximity to humans.

Vertical transmission (transovarial transmission) of dengue virus has been demonstrated in the laboratory but rarely in the field. The significance of vertical transmission for maintenance of the virus is not well understood. Sylvatic dengue strains in some parts of Africa and Asia may also lead to human infection, causing mild illness. Several factors can influence the dynamics of virus transmission — including environmental and climate factors, host pathogen interactions and population immunological factors. Climate directly influences the biology of the vectors and thereby their abundance and distribution; it is consequently an important determinant of vector-borne disease epidemics.

CLINICAL MANAGEMENT AND DELIVERY OF CLINICAL SERVICES

OVERVIEW

Dengue infection is a systemic and dynamic disease. It has a wide clinical spectrum that includes both severe and non-severe clinical manifestations [Rigau-Perez JG et al. Dengue and dengue haemorrhagic fever. Lancet, 1998, 352:971–977.] After the incubation period, the illness begins abruptly and is followed by three phases: febrile, critical and recovery.

For a disease that is complex in its manifestations, management is relatively simple, inexpensive and very effective in saving lives so long as correct and timely interventions are instituted. The key is early recognition and understanding of the clinical problems during the different phases of the disease, leading to a rational approach to case management and a good clinical outcome. An overview of good and bad clinical practices is given in Textbox.

Good clinical practice and bad clinical practice

Activities (triage and management decisions) at the primary and secondary care levels (where patients are first seen and evaluated) are critical in determining the clinical outcome of dengue. A well-managed front-line response not only reduces the number of unnecessary hospital admissions but also saves the lives of dengue patients. Early notification of dengue cases seen in primary and secondary care is crucial for identifying outbreaks and initiating an early response. Differential diagnosis needs to be considered.

The Course of Dengue Illness [Yip WCL. Dengue haemorrhagic fever: current approaches to management. Medical Progress, October 1980.]

Febrile phase

Patients typically develop high-grade fever suddenly. This acute febrile phase usually lasts 2–7 days and is often accompanied by facial flushing, skin erythema, generalized body-ache, myalgia, arthralgia and headache [Rigau-Perez JG et al. Dengue and dengue haemorrhagic fever. Lancet, 1998, 352:971–977.]

Some patients may have sore throat, injected pharynx and conjunctival injection. Anorexia, nausea and vomiting are common. It can be difficult to distinguish dengue clinically from non-dengue febrile diseases in the early febrile phase. A positive tourniquet test in this phase increases the probability of dengue.

[Kalayanarooj S et al. Early clinical and laboratory indicators of acute dengue illness. Journal of Infectious Diseases, 1997, 176:313–321.]

[Phuong CXT et al. Evaluation of the World Health Organization standard tourniquet test in the diagnosis of dengue infection in Vietnam. Tropical Medicine and International Health, 2002, 7:125–132.]

In addition, these clinical features are indistinguishable between severe and non-severe dengue cases. Therefore monitoring for warning signs and other clinical parameters is crucial to recognizing progression to the critical phase. Mild haemorrhagic manifestations such as petechiae and mucosal membrane bleeding (e.g. of the nose and gums) may be seen [Kalayanarooj S et al. Early clinical and laboratory indicators of acute dengue illness. Journal of Infectious Diseases, 1997, 176:313–321.][Balmaseda A et al. Assessment of the World Health Organization scheme for classification of dengue severity in Nicaragua. American Journal of Tropical Medicine and Hygiene, 2005, 73:1059–1062.]

Massive vaginal bleeding (in women of childbearing age) and gastrointestinal bleeding may occur during this phase but is not common [Balmaseda A et al. Assessment of the World Health Organization scheme for classification of dengue severity in Nicaragua. American Journal of Tropical Medicine and Hygiene, 2005, 73:1059–1062.]

The liver is often enlarged and tender after a few days of fever [Kalayanarooj S et al. Early clinical and laboratory indicators of acute dengue illness. Journal of Infectious Diseases, 1997, 176:313–321.]

The earliest abnormality in the full blood count is a progressive decrease in total white cell count, which should alert the physician to a high probability of dengue.

Critical phase

Around the time of defervescence, when the temperature drops to 37.5–38 °C or less and remains below this level, usually on days 3–7 of illness, an increase in capillary permeability in parallel with increasing haematocrit levels may occur

[Srikiatkhachorn A et al. Natural history of plasma leakage in dengue hemorrhagic fever: a serial ultrasonic study. Pediatric Infectious Disease Journal, 2007, 26(4):283– 290.]

[Nimmannitya S et al. Dengue and chikungunya virus infection in man in Thailand, 1962–64. Observations on hospitalized patients with haemorrhagic fever. American Journal of Tropical Medicine and Hygiene, 1969, 18(6):954–971.]

This marks the beginning of the critical phase. The period of clinically significant plasma leakage usually lasts 24–48 hours.

[Kalayanarooj S et al. Early clinical and laboratory indicators of acute dengue illness. Journal of Infectious Diseases, 1997, 176:313–321.]

Progressive leukopenia followed by a rapid decrease in platelet count usually precedes plasma leakage. At this point patients without an increase in capillary permeability will improve, while those with increased capillary permeability may become worse as a result of lost plasma volume. The degree of plasma leakage varies. Pleural effusion and ascites may be clinically detectable depending on the degree of plasma leakage and the volume of fluid therapy; hence chest X-ray and abdominal ultrasound can be useful diagnostic tools. The degree of increase above the baseline haematocrit often reflects the severity of plasma leakage.
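
The rise above the baseline haematocrit can be expressed as a simple percentage. The 20% threshold in this sketch is the cut-off traditionally used as evidence of significant plasma leakage in the older DHF criteria; it is added here only for illustration and is not stated in the passage above:

```python
def haematocrit_rise(current: float, baseline: float) -> float:
    """Percentage rise of the haematocrit above the patient's baseline."""
    return (current - baseline) / baseline * 100.0

# Illustrative values only: a baseline haematocrit of 40% rising to 50%.
rise = haematocrit_rise(50.0, 40.0)
print(f"{rise:.0f}% rise")  # 25% rise
# A rise of >= 20% has traditionally been taken as evidence of
# significant plasma leakage (assumed threshold, for illustration).
print(rise >= 20.0)  # True
```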

Shock occurs when a critical volume of plasma is lost through leakage. It is often preceded by warning signs. The body temperature may be subnormal when shock occurs. With prolonged shock, the consequent organ hypoperfusion results in progressive organ impairment, metabolic acidosis and disseminated intravascular coagulation. This in turn leads to severe haemorrhage, causing the haematocrit to decrease in severe shock. Instead of the leukopenia usually seen during this phase of dengue, the total white cell count may increase in patients with severe bleeding. In addition, severe organ impairment such as severe hepatitis, encephalitis or myocarditis and/or severe bleeding may also develop without obvious plasma leakage or shock.

[Martinez-Torres E, Polanco-Anaya AC, Pleites-Sandoval EB. Why and how children with dengue die? Revista cubana de medicina tropical, 2008, 60(1):40–47.]

Those who improve after defervescence are said to have non-severe dengue. Some patients progress to the critical phase of plasma leakage without defervescence; in these patients, changes in the full blood count should be used to detect the onset of the critical phase and plasma leakage.

Those who deteriorate will manifest with warning signs. This is called dengue with warning signs. Cases of dengue with warning signs will probably recover with early intravenous rehydration. Some cases will deteriorate to severe dengue.

Recovery phase

If the patient survives the 24–48 hour critical phase, a gradual reabsorption of extravascular compartment fluid takes place in the following 48–72 hours. General well-being improves, appetite returns, gastrointestinal symptoms abate, haemodynamic status stabilizes and diuresis ensues. Some patients may have a rash of “isles of white in the sea of red”

[Nimmannitya S. Clinical spectrum and management of dengue haemorrhagic fever. Southeast Asian Journal of Tropical Medicine and Public Health, 1987, 18(3):392–397.]

Some may experience generalized pruritus. Bradycardia and electrocardiographic changes are common during this stage. The haematocrit stabilizes or may be lower due to the dilutional effect of reabsorbed fluid. White blood cell count usually starts to rise soon after defervescence but the recovery of platelet count is typically later than that of white blood cell count.

Respiratory distress from massive pleural effusion and ascites may occur at any time if excessive intravenous fluids have been administered. During the critical and/or recovery phases, excessive fluid therapy is associated with pulmonary oedema or congestive heart failure.

Febrile, Critical and Recovery Phases in Dengue


Severe dengue

Severe dengue is defined by one or more of the following:

(i) plasma leakage that may lead to shock (dengue shock) and/or fluid accumulation, with or without respiratory distress, and/or

(ii) severe bleeding, and/or

(iii) severe organ impairment.

As dengue vascular permeability progresses, hypovolaemia worsens and results in shock. This usually takes place around defervescence, usually on day 4 or 5 (range days 3–7) of illness, preceded by the warning signs. During the initial stage of shock, the compensatory mechanism which maintains a normal systolic blood pressure also produces tachycardia and peripheral vasoconstriction with reduced skin perfusion, resulting in cold extremities and delayed capillary refill time. Uniquely, the diastolic pressure rises towards the systolic pressure and the pulse pressure narrows as the peripheral vascular resistance increases. Patients in dengue shock often remain conscious and lucid, so an inexperienced physician may measure a normal systolic pressure and misjudge the critical state of the patient.

Finally, there is decompensation and both pressures disappear abruptly. Prolonged hypotensive shock and hypoxia may lead to multi-organ failure and an extremely difficult clinical course.

The patient is considered to have shock if the pulse pressure (i.e. the difference between the systolic and diastolic pressures) is ≤ 20 mm Hg in children or if there are signs of poor capillary perfusion (cold extremities, delayed capillary refill, or rapid pulse rate).

In adults, a pulse pressure of ≤ 20 mm Hg may indicate a more severe shock.
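
The pulse-pressure criterion above can be written as a minimal decision sketch. The function names and the boolean flag are hypothetical; this illustrates the arithmetic only and is in no way a clinical tool:

```python
def pulse_pressure(systolic: float, diastolic: float) -> float:
    """Pulse pressure in mm Hg: systolic minus diastolic."""
    return systolic - diastolic

def suggests_shock(systolic: float, diastolic: float,
                   poor_perfusion: bool) -> bool:
    """Apply the criteria from the text (for children): pulse pressure
    <= 20 mm Hg, or signs of poor capillary perfusion (cold extremities,
    delayed capillary refill, rapid pulse rate)."""
    return pulse_pressure(systolic, diastolic) <= 20.0 or poor_perfusion

# Illustrative reading: a narrowed pulse pressure despite a normal
# systolic pressure, the situation the text warns can mislead.
print(suggests_shock(100.0, 85.0, poor_perfusion=False))  # True
```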

Hypotension is usually associated with prolonged shock, which is often complicated by major bleeding. Patients with severe dengue may have coagulation abnormalities, but these are usually not sufficient to cause major bleeding. When major bleeding does occur, it is almost always associated with profound shock since this, in combination with thrombocytopenia, hypoxia and acidosis, can lead to multiple organ failure and advanced disseminated intravascular coagulation. Massive bleeding may occur without prolonged shock in instances when acetylsalicylic acid (aspirin), ibuprofen or corticosteroids have been taken. Unusual manifestations, including acute liver failure and encephalopathy, may be present, even in the absence of severe plasma leakage or shock.

Cardiomyopathy and encephalitis are also reported in a few dengue cases. However, most deaths from dengue occur in patients with profound shock, particularly if the situation is complicated by fluid overload. Severe dengue should be considered if a patient from an area of dengue risk presents with fever of 2–7 days plus any of the following features:

  •  There is evidence of plasma leakage, such as:
    – high or progressively rising haematocrit;
    – pleural effusions or ascites;
    – circulatory compromise or shock (tachycardia, cold and clammy extremities, capillary refill time greater than three seconds, weak or undetectable pulse, narrow pulse pressure or, in late shock, unrecordable blood pressure).
  • There is significant bleeding.
  • There is an altered level of consciousness (lethargy or restlessness, coma, convulsions).
  • There is severe gastrointestinal involvement (persistent vomiting, increasing or intense abdominal pain, jaundice).
  • There is severe organ impairment (acute liver failure, acute renal failure, encephalopathy or encephalitis, cardiomyopathy) or other unusual manifestations.

Delivery of clinical services and case management

Introduction

Reducing dengue mortality requires an organized process that guarantees early recognition of the disease, and its management and referral when necessary. The key component of the process is the delivery of good clinical services at all levels of health care, from primary to tertiary levels. Most dengue patients recover without requiring hospital admission while some may progress to severe disease. Simple but effective triage principles and management decisions applied at the primary and secondary care levels, where patients are first seen and evaluated, can help in identifying those at risk of developing severe disease and needing hospital care. This should be complemented by prompt and appropriate management of severe dengue in referral centres.

Activities at the first level of care should focus on:

  • recognizing that the febrile patient could have dengue;
  • notifying early to the public health authorities that the patient is a suspected case of dengue;
  • managing patients in the early febrile phase of dengue;
  • recognizing the early stage of plasma leakage or critical phase and initiating fluid therapy;
  • recognizing patients with warning signs who need to be referred for admission and/or intravenous fluid therapy to a secondary health care facility;
  • recognizing and managing severe plasma leakage and shock, severe bleeding and severe organ impairment promptly and adequately.

Primary and secondary health care centres

At primary and secondary levels, health care facilities are responsible for emergency/ambulatory triage assessment and treatment.

Triage is the process of rapidly screening patients soon after their arrival in the hospital or health facility in order to identify those with severe dengue (who require immediate emergency treatment to avert death), those with warning signs (who should be given priority while waiting in the queue so that they can be assessed and treated without delay), and non-urgent cases (who have neither severe dengue nor warning signs).
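The three triage outcomes described above can be sketched as a small function (an illustrative sketch with assumed parameter names, not a substitute for clinical assessment):

```python
def triage_category(has_severe_dengue, has_warning_signs):
    """Map a rapid screening result to one of the three triage outcomes
    described in the text (illustrative sketch only)."""
    if has_severe_dengue:
        return "immediate emergency treatment"
    if has_warning_signs:
        return "priority assessment and treatment"
    return "non-urgent"
```

Note the ordering: severe dengue takes precedence even if warning signs are also present.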

During the early febrile phase, it is often not possible to predict clinically whether a patient with dengue will progress to severe disease. Various forms of severe manifestations may unfold only as the disease progresses through the critical phase, but the warning signs are good indicators of a higher risk of developing severe dengue. Therefore, the patient should have daily outpatient health care assessments for disease progression with careful checking for manifestations of severe dengue and warning signs.


Referral centres

Referral centres receiving severely ill dengue patients must be able to give prompt attention to referred cases. Beds should be made available to those patients who meet the admission criteria, even if elective cases have to be deferred. If possible, there should be a designated area to cohort dengue patients, and a high-dependency unit for closer monitoring of those with shock. These units should be staffed by doctors and nurses who are trained to recognize high-risk patients and to institute appropriate treatment and monitoring.

A number of criteria may be used to decide when to transfer a patient to a high dependency unit. These include:

  • early presentation with shock (on days 2 or 3 of illness);
  • severe plasma leakage and/or shock;
  • undetectable pulse and blood pressure;
  • severe bleeding;
  • fluid overload;
  • organ impairment (such as hepatic damage, cardiomyopathy, encephalopathy,
    encephalitis and other unusual complications).

Resources needed

In the detection and management of dengue, a range of resources is needed to deliver good clinical services at all levels.

Resources include [Martinez E. A Organizacao de Assistencia Medica durante uma epidemia de FHD-SCD. In: Dengue. Rio de Janeiro, Editorial Fiocruz, 2005 (pp 222–229)]:

  • Human resources: The most important resource is trained doctors and nurses. Adequate health personnel should be allocated to the first level of care to help in triage and emergency management. If possible, dengue units staffed by experienced personnel could be set up at referral centres to receive referred cases, particularly during dengue outbreaks, when the number of personnel may need to be increased.
  • Special area: A well equipped and well staffed area should be designated for giving immediate and transitory medical care to patients who require intravenous fluid therapy until they can be transferred to a ward or referral health facility.
  • Laboratory resources: The most important laboratory investigation is that of serial haematocrit levels and full blood counts. These investigations should be easily accessible from the health centre. Results should be available within two hours in severe cases of dengue. If no proper laboratory services are available, the minimum standard is the point-of-care testing of haematocrit by capillary (finger prick) blood sample with the use of a micro-centrifuge.
  •  Consumables: Intravenous fluids such as crystalloids, colloids and intravenous giving sets should be available.
  • Drugs: There should be adequate stocks of antipyretics and oral rehydration salts. In severe cases, additional drugs are necessary (vitamin K1, Ca gluconate, NaHCO3, glucose, furosemide, KCl solution, vasopressor, and inotropes).
  • Communication: Facilities should be provided for easy communication, especially between secondary and tertiary levels of care and laboratories, including consultation by telephone.
  • Blood bank: Blood and blood products will be required by only a small percentage of patients but should be made readily available to those who need them.

Education and training

To ensure the presence of adequate staffing at all levels, the education and training of doctors, nurses, auxiliary health care workers and laboratory staff are priorities.

Educational programmes that are customized for different levels of health care and that reflect local capacity should be supported and implemented widely. The educational programmes should develop capacities for effective triage and should improve recognition, clinical management and laboratory diagnosis of dengue.

National committees should monitor and evaluate clinical management and outcomes.

Review committees at different levels (e.g. national, state, district, hospital) should review all dengue deaths and, if possible, all cases of severe dengue, evaluate the health care delivery system, and provide feedback to doctors on how to improve care. In dengue-endemic countries, knowledge of dengue, its vectors and the transmission of the disease should be incorporated into the school curriculum. The population should also be educated about dengue in order to empower patients and their families in their own care – so that they seek medical care at the right time, avoid self-medication, identify skin bleeding, recognize the day of defervescence (and the following 48 hours) as the time when complications usually occur, and look for warning signs such as intense and continuous abdominal pain and frequent vomiting.

The mass media can make an important contribution if they are correctly briefed.

Workshops and other meetings with journalists, editors, artists and executives can contribute to drawing up the best strategy for health education and communication without alarming the public.

During dengue epidemics, nursing and medical students together with community activists can visit homes with the double purpose of providing health education and actively tracing dengue cases. This has been shown to be feasible, inexpensive and effective [Lemus ER, Estevez G, Velazquez JC. Campana por la Esparanza. La Lucha contra el Dengue (El Salvador, 2000). La Habana, Editors Politica, 2002.] and must be coordinated with the primary health care units. It is useful to have printed information about dengue illness and the warning signs for distribution to members of the community. Medical care providers must include health education activities such as disease prevention in their daily work.

Recommendations For Treatment

A stepwise approach to the management of dengue

Step I—Overall assessment

History

The history should include:

  • date of onset of fever/illness;
  • quantity of oral intake;
  • assessment for warning signs;
  • diarrhoea;
  • change in mental state/seizure/dizziness;
  • urine output (frequency, volume and time of last voiding);
  • other important aspects of the history, such as family or neighbourhood dengue, travel to dengue-endemic areas, co-existing conditions (e.g. infancy, pregnancy, obesity, diabetes mellitus, hypertension), jungle trekking and swimming in waterfalls (consider leptospirosis, typhus, malaria), and recent unprotected sex or drug abuse (consider acute HIV sero-conversion illness).

Physical examination

The physical examination should include:

  • assessment of mental state;
  • assessment of hydration status;
  • assessment of haemodynamic status;
  • checking for tachypnoea/acidotic breathing/pleural effusion;
  • checking for abdominal tenderness/hepatomegaly/ascites;
  • examination for rash and bleeding manifestations;
  • tourniquet test (repeat if previously negative or if there is no bleeding
    manifestation).

Investigation

A full blood count should be done at the first visit. A haematocrit test in the early febrile phase establishes the patient’s own baseline haematocrit. A decreasing white blood cell count makes dengue very likely. A rapid decrease in platelet count in parallel with a rising haematocrit compared to the baseline is suggestive of progress to the plasma leakage/critical phase of the disease. In the absence of the patient’s baseline, age specific population haematocrit levels could be used as a surrogate during the critical phase.

Laboratory tests should be performed to confirm the diagnosis. However, confirmation is not necessary for the acute management of patients, except in cases with unusual manifestations.

Additional tests should be considered as indicated (and if available). These should include tests of liver function, glucose, serum electrolytes, urea and creatinine, bicarbonate or lactate, cardiac enzymes, ECG and urine specific gravity.

Step II—Diagnosis, assessment of disease phase and severity

On the basis of evaluations of the history, physical examination and/or full blood count and haematocrit, clinicians should be able to determine whether the disease is dengue, which phase it is in (febrile, critical or recovery), whether there are warning signs, the hydration and haemodynamic status of the patient, and whether the patient requires admission.

Step III—Management

Disease notification

In dengue-endemic countries, cases of suspected, probable and confirmed dengue should be notified as soon as possible so that appropriate public health measures can be initiated.

Laboratory confirmation is not necessary before notification, but should be obtained. In non-endemic countries, usually only confirmed cases will be notified.

Suggested criteria for early notification of suspected cases are that the patient lives in or has travelled to a dengue-endemic area, has fever for three days or more, has low or decreasing white cell counts, and/or has thrombocytopaenia ± positive tourniquet test.
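The suggested criteria combine an exposure condition and a fever condition with any one supporting laboratory or bedside finding. A hedged sketch (parameter names are assumptions for illustration):

```python
def notify_suspected_case(in_or_from_endemic_area, fever_days,
                          low_or_falling_wbc=False,
                          thrombocytopaenia=False,
                          tourniquet_positive=False):
    """Sketch of the suggested early-notification criteria in the text:
    endemic residence/travel AND fever for three days or more, plus any
    one supporting finding. Illustrative only."""
    if not (in_or_from_endemic_area and fever_days >= 3):
        return False
    return (low_or_falling_wbc or thrombocytopaenia
            or tourniquet_positive)
```

In practice the threshold for notification should err on the side of early reporting, since, as noted above, late notification makes transmission harder to prevent.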

In dengue-endemic countries, the later the notification, the more difficult it is to prevent dengue transmission.

Management decisions

Depending on the clinical manifestations and other circumstances, patients may be sent home (Group A), be referred for in-hospital management (Group B), or require emergency treatment and urgent referral (Group C) [Martinez E. Preventing deaths from dengue: a space and challenge for primary health care. Pan American Journal of Public Health, 2006, 20:60–74.].

Treatment according to groups A–C

Group A – patients who may be sent home

These are patients who are able to tolerate adequate volumes of oral fluids and pass urine at least once every six hours, and do not have any of the warning signs, particularly when fever subsides.

Ambulatory patients should be reviewed daily for disease progression (decreasing white blood cell count, defervescence and warning signs) until they are out of the critical period. Those with stable haematocrit can be sent home after being advised to return to the hospital immediately if they develop any of the warning signs and to adhere to the following action plan:

  • Encourage oral intake of oral rehydration solution (ORS), fruit juice and other fluids containing electrolytes and sugar to replace losses from fever and vomiting. Adequate oral fluid intake may be able to reduce the number of hospitalizations [Harris E et al. Fluid intake and decreased risk for hospitalization for dengue fever, Nicaragua. Emerging Infectious Diseases, 2003, 9:1003–1006.]. [Caution: fluids containing sugar/glucose may exacerbate hyperglycaemia of physiological stress from dengue and diabetes mellitus.]
  • Give paracetamol for high fever if the patient is uncomfortable. The interval of paracetamol dosing should not be less than six hours. Tepid sponge if the patient still has high fever. Do not give acetylsalicylic acid (aspirin), ibuprofen or other non-steroidal anti-inflammatory drugs (NSAIDs), as these may aggravate gastritis or bleeding. Acetylsalicylic acid (aspirin) may be associated with Reye’s syndrome.
  • Instruct the care-givers that the patient should be brought to hospital immediately if any of the following occur: no clinical improvement, deterioration around the time of defervescence, severe abdominal pain, persistent vomiting, cold and clammy extremities, lethargy or irritability/restlessness, bleeding (e.g. black stools or coffee-ground vomiting), not passing urine for more than 4–6 hours.

Patients who are sent home should be monitored daily by health care providers for temperature pattern, volume of fluid intake and losses, urine output (volume and frequency), warning signs, signs of plasma leakage and bleeding, haematocrit, and white blood cell and platelet counts (see group B).

Group B – patients who should be referred for in-hospital management

Patients may need to be admitted to a secondary health care centre for close observation, particularly as they approach the critical phase. These include patients with warning signs, those with co-existing conditions that may make dengue or its management more complicated (such as pregnancy, infancy, old age, obesity, diabetes mellitus, renal failure, chronic haemolytic diseases), and those with certain social circumstances (such as living alone, or living far from a health facility without reliable means of transport).

If the patient has dengue with warning signs, the action plan should be as follows:

Obtain a reference haematocrit before fluid therapy. Give only isotonic solutions such as 0.9% saline, Ringer’s lactate, or Hartmann’s solution. Start with 5–7 ml/kg/hour for 1–2 hours, then reduce to 3–5 ml/kg/hr for 2–4 hours, and then reduce to 2–3 ml/kg/hr or less according to the clinical response.

Reassess the clinical status and repeat the haematocrit. If the haematocrit remains the same or rises only minimally, continue with the same rate (2–3 ml/kg/hr) for another 2–4 hours. If the vital signs are worsening and haematocrit is rising rapidly, increase the rate to 5–10 ml/kg/hour for 1–2 hours. Reassess the clinical status, repeat the haematocrit and review fluid infusion rates accordingly.

Give the minimum intravenous fluid volume required to maintain good perfusion and urine output of about 0.5 ml/kg/hr. Intravenous fluids are usually needed for only 24–48 hours. Reduce intravenous fluids gradually when the rate of plasma leakage decreases towards the end of the critical phase. This is indicated by urine output and/or oral fluid intake that is/are adequate, or haematocrit decreasing below the baseline value in a stable patient.
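The stepwise rates above are straightforward per-kilogram arithmetic. A sketch that tabulates the starting rates in ml/hour (illustrative arithmetic only; rates must be titrated to the clinical response and haematocrit, never applied mechanically):

```python
def warning_signs_crystalloid_plan(weight_kg):
    """Starting isotonic-crystalloid rates (ml/hour) for dengue with
    warning signs, per the stepwise plan in the text. Returns a list of
    (phase, low_rate, high_rate) tuples. Illustrative sketch only."""
    return [
        ("first 1-2 hours", 5 * weight_kg, 7 * weight_kg),
        ("next 2-4 hours",  3 * weight_kg, 5 * weight_kg),
        ("then (or less)",  2 * weight_kg, 3 * weight_kg),
    ]
```

For a 20 kg child, this gives 100–140 ml/hour initially, stepping down to 40–60 ml/hour or less according to response.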

Patients with warning signs should be monitored by health care providers until the period of risk is over. A detailed fluid balance should be maintained. Parameters that should be monitored include vital signs and peripheral perfusion (1–4 hourly until the patient is out of the critical phase), urine output (4–6 hourly), haematocrit (before and after fluid replacement, then 6–12 hourly), blood glucose, and other organ functions (such as renal profile, liver profile and coagulation profile, as indicated).

If the patient has dengue without warning signs, the action plan should be as follows:

Encourage oral fluids. If not tolerated, start intravenous fluid therapy of 0.9% saline or Ringer’s lactate with or without dextrose at maintenance rate. For obese and overweight patients, use the ideal body weight for calculation of fluid infusion. Patients may be able to take oral fluids after a few hours of intravenous fluid therapy. Thus, it is necessary to revise the fluid infusion frequently. Give the minimum volume required to maintain good perfusion and urine output. Intravenous fluids are usually needed only for 24–48 hours.
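The text specifies a "maintenance rate" without defining it. One common way to estimate it, shown here purely as an assumption for illustration, is the Holliday–Segar method:

```python
def maintenance_rate_ml_per_hr(weight_kg):
    """Illustrative maintenance-fluid estimate using the Holliday-Segar
    method -- an assumption, since the text does not define 'maintenance
    rate': 4 ml/kg/hr for the first 10 kg, 2 ml/kg/hr for the next 10 kg,
    and 1 ml/kg/hr for each kg thereafter. Use the ideal body weight for
    obese or overweight patients, as the text advises."""
    first = min(weight_kg, 10) * 4
    second = min(max(weight_kg - 10, 0), 10) * 2
    rest = max(weight_kg - 20, 0) * 1
    return first + second + rest
```

So a 25 kg child would receive roughly 65 ml/hour, subject to the frequent revision of the infusion that the text calls for.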

Patients should be monitored by health care providers for temperature pattern, volume of fluid intake and losses, urine output (volume and frequency), warning signs, haematocrit, and white blood cell and platelet counts. Other laboratory tests (such as liver and renal functions tests) can be done, depending on the clinical picture and the facilities of the hospital or health centre.

Group C – patients who require emergency treatment and urgent referral when they have severe dengue

Patients require emergency treatment and urgent referral when they are in the critical phase of disease, i.e. when they have:

  • severe plasma leakage leading to dengue shock and/or fluid accumulation with respiratory distress;
  • severe haemorrhages;
  • severe organ impairment (hepatic damage, renal impairment, cardiomyopathy, encephalopathy or encephalitis).

All patients with severe dengue should be admitted to a hospital with access to intensive care facilities and blood transfusion. Judicious intravenous fluid resuscitation is the essential and usually sole intervention required. The crystalloid solution should be isotonic and the volume just sufficient to maintain an effective circulation during the period of plasma leakage. Plasma losses should be replaced immediately and rapidly with isotonic crystalloid solution or, in the case of hypotensive shock, colloid solutions. If possible, obtain haematocrit levels before and after fluid resuscitation.

There should be continued replacement of further plasma losses to maintain effective circulation for 24–48 hours. For overweight or obese patients, the ideal body weight should be used for calculating fluid infusion rates. A group and cross-match should be done for all shock patients. Blood transfusion should be given only in cases with suspected/severe bleeding.

Fluid resuscitation must be clearly separated from simple fluid administration. This is a strategy in which larger volumes of fluids (e.g. 10–20 ml/kg boluses) are administered for a limited period of time under close monitoring, to evaluate the patient’s response and to avoid the development of pulmonary oedema. The degree of intravascular volume deficit in dengue shock varies. Input is typically much greater than output, and the input/output ratio is of no utility for judging fluid resuscitation needs during this period.

The goals of fluid resuscitation include improving central and peripheral circulation (decreasing tachycardia, improving blood pressure, pulse volume, warm and pink extremities, and capillary refill time <2 seconds) and improving end-organ perfusion – i.e. stable conscious level (more alert or less restless), urine output ≥ 0.5 ml/kg/hour, and decreasing metabolic acidosis.

Treatment of shock

The action plan for treating patients with compensated shock is as follows.

  • Start intravenous fluid resuscitation with isotonic crystalloid solutions at 5–10 ml/kg/hour over one hour. Then reassess the patient’s condition (vital signs, capillary refill time, haematocrit, urine output). The next steps depend on the situation.
  • If the patient’s condition improves, intravenous fluids should be gradually reduced to 5–7 ml/kg/hr for 1–2 hours, then to 3–5 ml/kg/hr for 2–4 hours, then to 2–3 ml/kg/hr, and then further depending on haemodynamic status, which can be maintained for up to 24–48 hours.
  • If vital signs are still unstable (i.e. shock persists), check the haematocrit after the first bolus. If the haematocrit increases or is still high (>50%), repeat a second bolus of crystalloid solution at 10–20 ml/kg/hr for one hour. After this second bolus, if there is improvement, reduce the rate to 7–10 ml/kg/hr for 1–2 hours, and then continue to reduce as above. If haematocrit decreases compared to the initial reference haematocrit (<40% in children and adult females, <45% in adult males), this indicates bleeding and the need to cross-match and transfuse blood as soon as possible (see treatment for haemorrhagic complications).
  • Further boluses of crystalloid or colloidal solutions may need to be given during the next 24–48 hours.
Algorithm For Fluid Management in Compensated Shock.
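The decision point after the first bolus can be sketched as follows (an illustrative sketch of the branch logic above, using the haematocrit thresholds quoted in the text; not a clinical tool):

```python
def compensated_shock_after_first_bolus(improved, haematocrit,
                                        haematocrit_rising,
                                        adult_male=False):
    """One decision point from the compensated-shock plan: what to do
    after the first 5-10 ml/kg/hour crystalloid bolus. Thresholds
    (>50% high; <40% children/adult females, <45% adult males for
    suspected bleeding) are those quoted in the text. Sketch only."""
    if improved:
        return "reduce fluids stepwise (5-7, 3-5, then 2-3 ml/kg/hr)"
    # Shock persists: interpret the repeat haematocrit.
    if haematocrit_rising or haematocrit > 50:
        return "second crystalloid bolus, 10-20 ml/kg/hr for one hour"
    bleeding_threshold = 45 if adult_male else 40
    if haematocrit < bleeding_threshold:
        return "cross-match and transfuse blood as soon as possible"
    return "reassess and repeat haematocrit"
```

The same reassess-and-branch pattern repeats after each subsequent bolus, as the bullet points above describe.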

Patients with hypotensive shock should be managed more vigorously. The action plan for treating patients with hypotensive shock is as follows:

  •  Initiate intravenous fluid resuscitation with crystalloid or colloid solution (if available) at 20 ml/kg as a bolus given over 15 minutes to bring the patient out of shock as quickly as possible.
  • If the patient’s condition improves, give a crystalloid/colloid infusion of 10 ml/kg/hr for one hour. Then continue with crystalloid infusion and gradually reduce to 5–7 ml/kg/hr for 1–2 hours, then to 3–5 ml/kg/hr for 2–4 hours, and then to 2–3 ml/kg/hr or less, which can be maintained for up to 24–48 hours.
  • If vital signs are still unstable (i.e. shock persists), review the haematocrit obtained before the first bolus. If the haematocrit was low (<40% in children and adult females, <45% in adult males), this indicates bleeding and the need to crossmatch and transfuse blood as soon as possible (see treatment for haemorrhagic complications).
  • If the haematocrit was high compared to the baseline value (if not available, use population baseline), change intravenous fluids to colloid solutions at 10–20 ml/kg as a second bolus over 30 minutes to one hour. After the second bolus, reassess the patient. If the condition improves, reduce the rate to 7–10 ml/kg/hr for 1–2 hours, then change back to crystalloid solution and reduce the rate of infusion as mentioned above. If the condition is still unstable, repeat the haematocrit after the second bolus.
  • If the haematocrit decreases compared to the previous value (<40% in children and adult females, <45% in adult males), this indicates bleeding and the need to cross-match and transfuse blood as soon as possible (see treatment for haemorrhagic complications). If the haematocrit increases compared to the previous value or remains very high (>50%), continue colloid solutions at 10–20 ml/kg as a third bolus over one hour. After this dose, reduce the rate to 7–10 ml/kg/hr for 1–2 hours, then change back to crystalloid solution and reduce the rate of infusion as mentioned above when the patient’s condition improves.
  • Further boluses of fluids may need to be given during the next 24 hours. The rate and volume of each bolus infusion should be titrated to the clinical response. Patients with severe dengue should be admitted to the high-dependency or intensive care area.
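The arithmetic behind the initial hypotensive-shock bolus above is simple but worth making explicit, since delivering 20 ml/kg in 15 minutes implies a high pump rate (illustrative arithmetic only):

```python
def hypotensive_shock_initial_bolus(weight_kg):
    """Initial resuscitation bolus from the plan above: 20 ml/kg of
    crystalloid or colloid over 15 minutes. Returns (total volume in ml,
    equivalent infusion rate in ml/hour). Illustrative sketch only."""
    volume_ml = 20 * weight_kg
    rate_ml_per_hr = volume_ml * (60 / 15)  # 15-minute bolus
    return volume_ml, rate_ml_per_hr
```

For a 25 kg child this is a 500 ml bolus, i.e. an instantaneous rate of 2000 ml/hour, which underlines why such boluses demand the close monitoring described below.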

Patients with dengue shock should be frequently monitored until the danger period is over. A detailed fluid balance of all input and output should be maintained.

Parameters that should be monitored include vital signs and peripheral perfusion (every 15–30 minutes until the patient is out of shock, then 1–2 hourly). In general, the higher the fluid infusion rate, the more frequently the patient should be monitored and reviewed in order to avoid fluid overload while ensuring adequate volume replacement.

If resources are available, a patient with severe dengue should have an arterial line placed as soon as practical. The reason for this is that in shock states, estimation of blood pressure using a cuff is commonly inaccurate. The use of an indwelling arterial catheter allows for continuous and reproducible blood pressure measurements and frequent blood sampling on which decisions regarding therapy can be based. Monitoring of ECG and pulse oximetry should be available in the intensive care unit.

Urine output should be checked regularly (hourly till the patient is out of shock, then 1–2 hourly). A continuous bladder catheter enables close monitoring of urine output. An acceptable urine output would be about 0.5 ml/kg/hour. Haematocrit should be monitored (before and after fluid boluses until stable, then 4–6 hourly). In addition, there should be monitoring of arterial or venous blood gases, lactate, total carbon dioxide/bicarbonate (every 30 minutes to one hour until stable, then as indicated), blood glucose (before fluid resuscitation and repeat as indicated), and other organ functions (such as renal profile, liver profile, coagulation profile, before resuscitation and as indicated).

Changes in the haematocrit are a useful guide to treatment. However, changes must be interpreted in parallel with the haemodynamic status, the clinical response to fluid therapy and the acid-base balance. For instance, a rising or persistently high haematocrit together with unstable vital signs (particularly narrowing of the pulse pressure) indicates active plasma leakage and the need for a further bolus of fluid replacement. However, a rising or persistently high haematocrit together with stable haemodynamic status and adequate urine output does not require extra intravenous fluid. In the latter case, continue to monitor closely and it is likely that the haematocrit will start to fall within the next 24 hours as the plasma leakage stops.
A decrease in haematocrit together with unstable vital signs (particularly narrowing of the pulse pressure, tachycardia, metabolic acidosis, poor urine output) indicates major haemorrhage and the need for urgent blood transfusion. Yet a decrease in haematocrit together with stable haemodynamic status and adequate urine output indicates haemodilution and/or reabsorption of extravasated fluids, so in this case intravenous fluids must be discontinued immediately to avoid pulmonary oedema.
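The two paragraphs above form a two-by-two decision matrix: haematocrit trend against haemodynamic status. A compact sketch (illustrative only, not a clinical tool):

```python
def interpret_haematocrit_trend(rising_or_high, vitals_stable):
    """Decision matrix from the text: haematocrit trend crossed with
    haemodynamic status. Sketch only -- interpretation must also take
    the clinical response to fluids and acid-base balance into account."""
    if rising_or_high:
        return ("no extra IV fluid; continue close monitoring"
                if vitals_stable
                else "active plasma leakage: give a further fluid bolus")
    # Falling haematocrit
    return ("haemodilution/reabsorption: stop IV fluids"
            if vitals_stable
            else "major haemorrhage: urgent blood transfusion")
```

The key point the matrix captures is that neither the haematocrit nor the vital signs alone dictates the action; it is the combination that does.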
 
Treatment of haemorrhagic complications
Mucosal bleeding may occur in any patient with dengue but, if the patient remains stable with fluid resuscitation/replacement, it should be considered as minor. The bleeding usually improves rapidly during the recovery phase. In patients with profound thrombocytopaenia, ensure strict bed rest and protect from trauma to reduce the risk of bleeding. Do not give intramuscular injections to avoid haematoma. It should be noted that prophylactic platelet transfusions for severe thrombocytopaenia in otherwise haemodynamically stable patients have not been shown to be effective and are not necessary. [Lum L et al. Preventive transfusion in dengue shock syndrome – is it necessary? Journal of Pediatrics, 2003, 143:682–684.]
If major bleeding occurs it is usually from the gastrointestinal tract, and/or vagina in adult females. Internal bleeding may not become apparent for many hours until the first black stool is passed.
Patients at risk of major bleeding are those who:
  • have prolonged/refractory shock;
  • have hypotensive shock and renal or liver failure and/or severe and persistent metabolic acidosis;
  • are given non-steroidal anti-inflammatory agents;
  • have pre-existing peptic ulcer disease;
  • are on anticoagulant therapy;
  • have any form of trauma, including intramuscular injection.
Patients with haemolytic conditions are at risk of acute haemolysis with haemoglobinuria and will require blood transfusion.
Severe bleeding can be recognized by:
  • persistent and/or severe overt bleeding in the presence of unstable haemodynamic status, regardless of the haematocrit level;
  • a decrease in haematocrit after fluid resuscitation together with unstable haemodynamic status;
  • refractory shock that fails to respond to consecutive fluid resuscitation of 40-60 ml/kg;
  • hypotensive shock with low/normal haematocrit before fluid resuscitation;
  • persistent or worsening metabolic acidosis ± a well-maintained systolic blood pressure, especially in those with severe abdominal tenderness and distension.
Blood transfusion is life-saving and should be given as soon as severe bleeding is suspected or recognized. However, blood transfusion must be given with care because of the risk of fluid overload. Do not wait for the haematocrit to drop too low before deciding on blood transfusion. Note that haematocrit of <30% as a trigger for blood transfusion, as recommended in the Surviving Sepsis Campaign Guideline [Dellinger RP, Levy MM, Carlet JM. Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock: 2008. Critical Care Medicine, 2008, 36:296–327.], is not applicable to severe dengue. The reason for this is that, in dengue, bleeding usually occurs after a period of prolonged shock that is preceded by plasma leakage. During the plasma leakage the haematocrit increases to relatively high values before the onset of severe bleeding. When bleeding occurs, haematocrit will then drop from this high level. As a result, haematocrit levels may not be as low as in the absence of plasma leakage.
The action plan for the treatment of haemorrhagic complications is as follows:
  • Give 5–10 ml/kg of fresh-packed red cells or 10–20 ml/kg of fresh whole blood at an appropriate rate and observe the clinical response. It is important that fresh whole blood or fresh red cells are given. Oxygen delivery at tissue level is optimal with high levels of 2,3 di-phosphoglycerate (2,3 DPG). Stored blood loses 2,3 DPG, low levels of which impede the oxygen-releasing capacity of haemoglobin, resulting in functional tissue hypoxia. A good clinical response includes improving haemodynamic status and acid-base balance.
  • Consider repeating the blood transfusion if there is further blood loss or no appropriate rise in haematocrit after blood transfusion. There is little evidence to support the practice of transfusing platelet concentrates and/or fresh-frozen plasma for severe bleeding. This is practised when massive bleeding cannot be managed with fresh whole blood or fresh-packed cells alone, but it may exacerbate fluid overload.
  • Great care should be taken when inserting a naso-gastric tube, which may cause severe haemorrhage and may block the airway. A lubricated oro-gastric tube may minimize the trauma during insertion. Insertion of central venous catheters should be done with ultra-sound guidance or by a very experienced person.
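The transfusion dose ranges quoted in the action plan are per-kilogram, so the total volume to request is simple arithmetic (illustrative only, not a prescription):

```python
def transfusion_volume_ml(weight_kg, product):
    """Total volume range (ml) for the doses quoted in the text:
    5-10 ml/kg fresh-packed red cells, 10-20 ml/kg fresh whole blood.
    Product keys are assumptions for this sketch."""
    per_kg = {"fresh_packed_red_cells": (5, 10),
              "fresh_whole_blood": (10, 20)}[product]
    return per_kg[0] * weight_kg, per_kg[1] * weight_kg
```

For a 30 kg patient this gives 150–300 ml of packed cells or 300–600 ml of whole blood, given at an appropriate rate while observing the clinical response.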
Treatment of complications and other areas of treatment
Fluid overload
Fluid overload with large pleural effusions and ascites is a common cause of acute respiratory distress and failure in severe dengue. Other causes of respiratory distress include acute pulmonary oedema, severe metabolic acidosis from severe shock, and Acute Respiratory Distress Syndrome (ARDS) (refer to standard textbook of clinical care for further guidance on management).
Causes of fluid overload are:
  • excessive and/or too rapid intravenous fluids;
  • incorrect use of hypotonic rather than isotonic crystalloid solutions;
  • inappropriate use of large volumes of intravenous fluids in patients with unrecognized severe bleeding;
  • inappropriate transfusion of fresh-frozen plasma, platelet concentrates and cryoprecipitates;
  • continuation of intravenous fluids after plasma leakage has resolved (24–48 hours from defervescence);
  • co-morbid conditions such as congenital or ischaemic heart disease, chronic lung and renal diseases.
Early clinical features of fluid overload are:
  • respiratory distress, difficulty in breathing;
  • rapid breathing;
  • chest wall in-drawing;
  • wheezing (rather than crepitations);
  • large pleural effusions;
  • tense ascites;
  • increased jugular venous pressure (JVP).
Late clinical features are:
  • pulmonary oedema (cough with pink or frothy sputum ± crepitations, cyanosis);
  • irreversible shock (heart failure, often in combination with ongoing hypovolaemia).
Additional investigations are:
  • chest x-ray, which may show cardiomegaly, pleural effusion, upward displacement of the diaphragm by the ascites and varying degrees of “bat’s wings” appearance ± Kerley B lines suggestive of fluid overload and pulmonary oedema;
  • ECG to exclude ischaemic changes and arrhythmia;
  • arterial blood gases;
  • echocardiogram for assessment of left ventricular function, dimensions and regional wall dyskinesia that may suggest underlying ischaemic heart disease;
  • cardiac enzymes
The action plan for the treatment of fluid overload is as follows:
  • Oxygen therapy should be given immediately.
  • Stopping intravenous fluid therapy during the recovery phase will allow fluid in the pleural and peritoneal cavities to return to the intravascular compartment. This results in diuresis and resolution of pleural effusion and ascites. Recognizing when to decrease or stop intravenous fluids is key to preventing fluid overload. When the following signs are present, intravenous fluids should be discontinued or reduced to the minimum rate necessary to maintain euglycaemia:
  • signs of cessation of plasma leakage;
  • stable blood pressure, pulse and peripheral perfusion;
  • haematocrit decreases in the presence of a good pulse volume;
  • afebrile for more than 24–48 hours (without the use of antipyretics);
  • resolving bowel/abdominal symptoms;
  • improving urine output.
  • The management of fluid overload varies according to the phase of the disease and the patient’s haemodynamic status. If the patient has stable haemodynamic status and is out of the critical phase (more than 24–48 hours of defervescence), stop intravenous fluids but continue close monitoring. If necessary, give oral or intravenous furosemide 0.1–0.5 mg/kg/dose once or twice daily, or a continuous infusion of furosemide 0.1 mg/kg/hour.
  • Monitor serum potassium and correct the ensuing hypokalaemia.
  • If the patient has stable haemodynamic status but is still within the critical phase, reduce the intravenous fluid accordingly. Avoid diuretics during the plasma leakage phase because they may lead to intravascular volume depletion.
  • Patients who remain in shock with low or normal haematocrit levels but show signs of fluid overload may have occult haemorrhage. Further infusion of large volumes of intravenous fluids will lead only to a poor outcome. Careful fresh whole blood transfusion should be initiated as soon as possible. If the patient remains in shock and the haematocrit is elevated, repeated small boluses of a colloid solution may help.
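The action plan above can be summarized as branching logic. The sketch below is for illustration only, not a clinical decision tool; the function name, field names and advice strings are hypothetical, and the >48-hour boundary for "out of the critical phase" is one reading of the 24–48-hour window given in the text.

```python
# Illustrative sketch of the fluid-overload action plan described above.
# All names and returned strings are hypothetical; NOT a clinical tool.

def fluid_overload_plan(stable_haemodynamics: bool,
                        hours_after_defervescence: float,
                        weight_kg: float) -> dict:
    """Map patient state to the management step described in the text."""
    advice = {"oxygen": "give immediately"}  # first step in every case
    if stable_haemodynamics and hours_after_defervescence > 48:
        # Out of the critical phase: stop IV fluids; furosemide if needed,
        # 0.1-0.5 mg/kg/dose once or twice daily.
        low, high = 0.1 * weight_kg, 0.5 * weight_kg  # mg per dose
        advice["iv_fluids"] = "stop; continue close monitoring"
        advice["furosemide_mg_per_dose"] = (round(low, 1), round(high, 1))
        advice["monitor"] = "serum potassium (correct hypokalaemia)"
    elif stable_haemodynamics:
        # Still within the critical (plasma leakage) phase.
        advice["iv_fluids"] = "reduce accordingly"
        advice["diuretics"] = "avoid (risk of intravascular volume depletion)"
    else:
        # Shock with signs of fluid overload: consider occult haemorrhage.
        advice["iv_fluids"] = "avoid large volumes"
        advice["consider"] = "careful fresh whole blood transfusion"
    return advice

# For a 20 kg child out of the critical phase, the furosemide range works
# out to 2.0-10.0 mg per dose.
plan = fluid_overload_plan(True, 72, 20.0)
```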
Other complications of dengue
Both hyperglycaemia and hypoglycaemia may occur, even in the absence of diabetes mellitus and/or hypoglycaemic agents. Electrolyte and acid-base imbalances are also common observations in severe dengue and are probably related to gastrointestinal losses through vomiting and diarrhoea or to the use of hypotonic solutions for resuscitation and correction of dehydration. Hyponatraemia, hypokalaemia, hyperkalaemia, serum calcium imbalances and metabolic acidosis (sodium bicarbonate for metabolic acidosis is not recommended for pH ≥ 7.15) can occur. One should also be alert for co-infections and nosocomial infections.
Supportive care and adjuvant therapy
Supportive care and adjuvant therapy may be necessary in severe dengue. This may
include:
  • renal replacement therapy, with a preference for continuous veno-venous haemodialysis (CVVH), since peritoneal dialysis has a risk of bleeding;
  • vasopressor and inotropic therapies as temporary measures to prevent life-threatening hypotension in dengue shock and during induction for intubation, while correction of intravascular volume is being vigorously carried out;
  • further treatment of organ impairment, such as severe hepatic involvement or encephalopathy or encephalitis;
  • further treatment of cardiac abnormalities, such as conduction abnormalities, which may occur but usually do not require intervention.
  • In this context there is little or no evidence in favour of the use of steroids and intravenous immunoglobulins, or of recombinant Activated Factor VII.
Refer to standard textbooks of clinical care for more detailed information regarding the treatment of complications and other areas of treatment.

LABORATORY DIAGNOSIS AND DIAGNOSTIC TESTS

 

OVERVIEW
Efficient and accurate diagnosis of dengue is of primary importance for clinical care (i.e. early detection of severe cases, case confirmation and differential diagnosis with other infectious diseases), surveillance activities, outbreak control, pathogenesis, academic research, vaccine development, and clinical trials.
Laboratory diagnosis methods for confirming dengue virus infection may involve detection of the virus, viral nucleic acid, antigens or antibodies, or a combination of these techniques. After the onset of illness, the virus can be detected in serum, plasma, circulating blood cells and other tissues for 4–5 days. During the early stages of the disease, virus isolation, nucleic acid or antigen detection can be used to diagnose the infection. At the end of the acute phase of infection, serology is the method of choice for diagnosis.
Antibody response to infection differs according to the immune status of the host. [Vorndam V, Kuno G. Laboratory diagnosis of dengue virus infections. In: Gubler DJ, Kuno G, eds. Dengue and dengue hemorrhagic fever. New York, CAB International, 1997:313–333.]
When dengue infection occurs in persons who have not previously been infected with a flavivirus or immunized with a flavivirus vaccine (e.g. for yellow fever, Japanese encephalitis, tick-borne encephalitis), the patients develop a primary antibody response characterized by a slow increase of specific antibodies. IgM antibodies are the first immunoglobulin isotype to appear. These antibodies are detectable in 50% of patients by days 3–5 after onset of illness, increasing to 80% by day 5 and 99% by day 10. IgM levels peak about two weeks after the onset of symptoms and then decline generally to undetectable levels over 2–3 months. Anti-dengue serum IgG is generally detectable at low titres at the end of the first week of illness, increasing slowly thereafter, with serum IgG still detectable after several months, and probably even for life. [Innis B et al. An enzyme-linked immunosorbent assay to characterize dengue infections where dengue and Japanese encephalitis co-circulate. American Journal of Tropical Medicine and Hygiene, 1989, 40:418–427.]
[PAHO. Dengue and dengue hemorrhagic fever in the Americas: guidelines for
prevention and control. Washington, DC, Pan American Health Organization, 1994
(Scientific Publication No. 548).]
[WHO. Dengue haemorrhagic fever: diagnosis, treatment, prevention and control,
2nd ed. Geneva, World Health Organization, 1997.]
During a secondary dengue infection (a dengue infection in a host that has previously been infected by a dengue virus, or sometimes after non-dengue flavivirus vaccination or infection), antibody titres rise rapidly and react broadly against many flaviviruses. The dominant immunoglobulin isotype is IgG which is detectable at high levels, even in the acute phase, and persists for periods lasting from 10 months to life. Early convalescent stage IgM levels are significantly lower in secondary infections than in primary ones and may be undetectable in some cases, depending on the test used.[Chanama S et al. Analysis of specific IgM responses in secondary dengue virus infections: levels and positive rates in comparison with primary infections. Journal of Clinical Virology, 2004, 31:185–189.]
To distinguish primary and secondary dengue infections, IgM/IgG antibody ratios are now more commonly used than the haemagglutination-inhibition test (HI) [Kuno G, Gomez I, Gubler DJ. An ELISA procedure for the diagnosis of dengue infections. Journal of Virological Methods, 1991, 33:101–113.]
[Shu PY et al. Comparison of a capture immunoglobulin M (IgM) and IgG ELISA and non-structural protein NS1 serotype-specific IgG ELISA for differentiation of primary and secondary dengue virus infections. Clinical and Diagnostic Laboratory Immunology, 2003, 10:622–630.]
[Falconar AK, de Plata E, Romero-Vivas CM. Altered enzyme-linked immunosorbent assay immunoglobulin M (IgM)/IgG optical density ratios can correctly classify all primary or secondary dengue virus infections 1 day after the onset of symptoms, when all of the viruses can be isolated. Clinical and Vaccine Immunology, 2006, 13:1044–1051.]
A range of laboratory diagnostic methods has been developed to support patient management and disease control. The choice of diagnostic method depends on the purpose for which the testing is done (e.g. clinical diagnosis, epidemiological survey, vaccine development), the type of laboratory facilities and technical expertise available, costs, and the time of sample collection.
In general, tests with high sensitivity and specificity require more complex technologies and technical expertise, while rapid tests may compromise sensitivity and specificity for the ease of performance and speed. Virus isolation and nucleic acid detection are more labour-intensive and costly but are also more specific than antibody detection using serologic methods. The accompanying figure shows a general inverse relationship between the ease of use or accessibility of a diagnostic method and the confidence in the results of the test.
CONSIDERATIONS IN THE CHOICE OF DIAGNOSTIC METHODS
Clinical Management
Dengue virus infection produces a broad spectrum of symptoms, many of which are non-specific. Thus, a diagnosis based only on clinical symptoms is unreliable. Early laboratory confirmation of clinical diagnosis may be valuable because some patients progress over a short period from mild to severe disease and sometimes to death. Early intervention may be life-saving.
Before day 5 of illness, during the febrile period, dengue infections may be diagnosed by virus isolation in cell culture, by detection of viral RNA by nucleic acid amplification tests (NAAT), or by detection of viral antigens by ELISA or rapid tests. Virus isolation in cell culture is usually performed only in laboratories with the necessary infrastructure and technical expertise. For virus culture, it is important to keep blood samples cooled or frozen to preserve the viability of the virus during transport from the patient to the laboratory. The isolation and identification of dengue viruses in cell cultures usually takes several days. Nucleic acid detection assays with excellent performance characteristics may identify dengue viral RNA within 24–48 hours. However, these tests require expensive equipment and reagents and, in order to avoid contamination, tests must observe quality control procedures and must be performed by experienced technicians.

NS1 antigen detection kits now becoming commercially available can be used in laboratories with limited equipment and yield results within a few hours. Rapid dengue antigen detection tests can be used in field settings and provide results in less than an hour. Currently, these assays are not type-specific, are expensive and are under evaluation for diagnostic accuracy and cost-effectiveness in multiple settings.

Summary of operating characteristics and comparative costs of dengue diagnostic methods. Pelegrino JL. Summary of dengue diagnostic methods. World Health Organization, Special Programme for Research and Training in Tropical Diseases, 2006 (unpublished report).

After day 5, dengue viruses and antigens disappear from the blood coincident with the appearance of specific antibodies. NS1 antigen may be detected in some patients for a few days after defervescence. Dengue serologic tests are more available in dengue-endemic countries than are virological tests. Specimen transport is not a problem as immunoglobulins are stable at tropical room temperatures.

For serology, the time of specimen collection is more flexible than that for virus isolation or RNA detection because an antibody response can be measured by comparing a sample collected during the acute stage of illness with samples collected weeks or months later. Low levels of a detectable dengue IgM response – or the absence of it – in some secondary infections reduces the diagnostic accuracy of IgM ELISA tests. Results of rapid tests may be available within less than one hour. Reliance on rapid tests to diagnose dengue infections should be approached with caution, however, since the performance of all commercial tests has not yet been evaluated by reference laboratories.[Hunsperger EA et al. Evaluation of commercially available anti–dengue virus immunoglobulin M tests. Emerging Infectious Diseases (serial online), 2009, March (date cited). Accessible at http://www.cdc.gov/EID/content/15/3/436.htm]

A four-fold or greater increase in antibody levels measured by IgG ELISA or by haemagglutination inhibition (HI) test in paired sera indicates an acute or recent flavivirus infection. However, waiting for the convalescent serum, usually collected at the time of patient discharge, is of little use for acute diagnosis and clinical management, and provides only a retrospective result.
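The four-fold criterion in paired sera is simple arithmetic on reciprocal end-point dilution titres. A minimal sketch, with a hypothetical function name; treating seroconversion from an undetectable acute titre as a positive result is an assumption for illustration:

```python
# Paired-sera criterion from the text: a four-fold or greater rise in
# antibody titre between acute and convalescent samples indicates an
# acute or recent flavivirus infection. Titres are reciprocal end-point
# dilutions (e.g. 40 means 1:40).

def fourfold_rise(acute_titre: int, convalescent_titre: int) -> bool:
    if acute_titre <= 0:
        # Seroconversion from undetectable counts as a significant rise
        # (illustrative assumption).
        return convalescent_titre > 0
    return convalescent_titre / acute_titre >= 4

print(fourfold_rise(40, 160))  # 160/40 = 4-fold -> True
print(fourfold_rise(40, 80))   # only 2-fold -> False
```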

Differential diagnosis

Dengue fever can easily be confused with non-dengue illnesses, particularly in non-epidemic situations. Depending on the geographical origin of the patient, other etiologies – including non-dengue flavivirus infections – should be ruled out. These include yellow fever, Japanese encephalitis, St Louis encephalitis, Zika and West Nile; alphaviruses (such as Sindbis and chikungunya); and other causes of fever such as malaria, leptospirosis, typhoid, rickettsial diseases (Rickettsia prowazekii, R. mooseri, R. conorii, R. rickettsii, Orientia tsutsugamushi, Coxiella burnetii, etc.), measles, enteroviruses, influenza and influenza-like illnesses, and haemorrhagic fevers (Arenaviridae: Junin, etc.; Filoviridae: Marburg, Ebola; Bunyaviridae: hantaviruses, Crimean-Congo haemorrhagic fever, etc.).

A combination of virus, viral RNA or viral antigen identification and detection of an antibody response is preferable for dengue diagnosis to either approach alone.

Interpretation of dengue diagnostic tests [adapted from Dengue and Control (DENCO) study]
Unfortunately, an ideal diagnostic test that permits early and rapid diagnosis, is affordable
for different health systems, is easy to perform, and has a robust performance, is not yet
available.
Outbreak investigations
During outbreaks some patients may be seen presenting with fever with or without rash
during the acute illness stage; some others may present with signs of plasma leakage or
shock, and others with signs of haemorrhages, while still others may be observed during
the convalescent phase.
One of the priorities in a suspected outbreak is to identify the causative agent so that
appropriate public health measures can be taken and physicians can be encouraged to
initiate appropriate acute illness management. In such cases, the rapidity and specificity
of diagnostic tests is more important than test sensitivity. Samples collected from febrile patients could be tested by nucleic acid methods in a well-equipped laboratory or a broader spectrum of laboratories using an ELISA-based dengue antigen detection kit. If specimens are collected after day 5 of illness, commercial IgM ELISA or sensitive dengue IgM rapid tests may suggest a dengue outbreak, but results are preferably confirmed with reliable serological tests performed in a reference laboratory with broad arbovirus diagnostic capability. Serological assays may be used to determine the extent of outbreaks.

Surveillance

Dengue surveillance systems aim to detect the circulation of specific viruses in the human
or mosquito populations. The diagnostic tools used should be sensitive, specific and
affordable for the country. Laboratories responsible for surveillance are usually national
and/or reference laboratories capable of performing diagnostic tests as described
above for dengue and for a broad range of other etiologies.

Vaccine trials

Vaccine trials are performed in order to measure vaccine safety and efficacy in vaccinated
persons. The plaque reduction and neutralization test (PRNT) and the microneutralization
assays are commonly used to measure protection correlates.
Following primary infections in individuals with no prior flavivirus immunity, neutralizing antibodies as measured by PRNT may be relatively or completely specific to the infecting virus type.[Morens DM et al. Simplified plaque reduction neutralization assay for dengue viruses by semimicro methods in BHK-21 cells: comparison of the BHK suspension test with standard plaque reduction neutralization. Journal of Clinical Microbiology, 1985, 22(2):250–254.]
[Alvarez M et al. Improved dengue virus plaque formation on BHK21 and LLCMK2
cells: evaluation of some factors. Dengue Bulletin, 2005, 29:1–9.]
This assay is the most reliable means of measuring the titre of neutralizing antibodies in the serum of an infected individual as a measure of the level of protection against an infecting virus. The assay is based on the principle that neutralizing antibodies inactivate the virus so that it is no longer able to infect and replicate in target cells.
After a second dengue virus infection, high-titre neutralizing antibodies are produced against at least two, and often all four, dengue viruses as well as against non-dengue flaviviruses. This cross reactivity results from memory B-cells which produce antibodies directed at virion epitopes shared by dengue viruses. During the early convalescent stage following sequential dengue infections, the highest neutralizing antibody titre is often directed against the first infecting virus and not the most recent one. This phenomenon is referred to as “original antigenic sin”.[Halstead SB, Rojanasuphot S, Sangkawibha N. Original antigenic sin in dengue. American Journal of Tropical Medicine and Hygiene, 1983, 32:154–156.]
The main disadvantage of the PRNT is that it is labour-intensive. A number of laboratories recently developed high-throughput neutralization tests that can be used in large-scale surveillance studies and vaccine trials. Variable results have been observed in PRNTs performed in different laboratories. Variations can be minimized if tests are performed on standard cell lines using the same virus strains and the same temperature and time for incubation of virus with antibody. Input virus should be carefully calculated to avoid plaque overlap. Cell lines of mammalian origin, such as VERO cells, are recommended for the production of seed viruses for use in PRNT.
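The PRNT endpoint is conventionally reported as the reciprocal of the highest serum dilution that reduces plaque counts by at least 50% relative to a virus-only control (PRNT50). A minimal sketch under that convention; the function name and the plaque counts are invented for illustration:

```python
# PRNT50 sketch: the titre is the reciprocal of the highest serum dilution
# that reduces plaque counts by >= 50% versus the virus-only control.
# The 50% endpoint is the common convention; counts are illustrative.

def prnt50(control_plaques, plaques_by_dilution):
    """plaques_by_dilution maps reciprocal dilution (20 for 1:20, etc.)
    to the plaque count observed at that dilution."""
    titre = None
    for dilution in sorted(plaques_by_dilution):  # most to least concentrated
        reduction = 1 - plaques_by_dilution[dilution] / control_plaques
        if reduction >= 0.5:
            titre = dilution  # highest dilution so far meeting the cutoff
    return titre

# Control well: 50 plaques. 1:20 and 1:40 meet the 50% cutoff; 1:80 does not.
print(prnt50(50, {20: 5, 40: 12, 80: 30, 160: 45}))  # -> 40
```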
The microneutralization assay is based on the same principle as PRNT. Variable methods exist. In one, instead of counting the number of plaques per well, viral antigen is stained using a labelled antibody and the quantity of antigen measured colorimetrically. The test may also measure nucleic acid using PCR. The microneutralization assay was designed to use smaller amounts of reagents and to test larger numbers of samples. In viral antigen detection formats the spread of virus throughout the cells is not limited by a semisolid overlay as it is in PRNTs, so the time after infection must be standardized to avoid measuring growth over many cycles of replication. Since not all viruses grow at the same rate, the incubation periods are virus-specific. As with standard PRNTs, antibodies measured by micro methods from individuals with secondary infections may react broadly with all four dengue viruses.
Advantages and limitations of dengue diagnostic methods. Summary of dengue diagnostic methods. World Health Organization, Special Programme for Research and Training in Tropical Diseases, 2006 (unpublished report).

In drug trials, patients should have a confirmed etiological diagnosis.

Current dengue diagnostic methods

Virus isolation

Specimens for virus isolation should be collected early in the course of the infection, during the period of viraemia (usually before day 5). Virus may be recovered from serum, plasma and peripheral blood mononuclear cells and attempts may be made from tissues collected at autopsy (e.g. liver, lung, lymph nodes, thymus, bone marrow).

Because dengue virus is heat-labile, specimens awaiting transport to the laboratory should be kept in a refrigerator or packed in wet ice. For storage up to 24 hours, specimens should be kept at between +4 °C and +8 °C. For longer storage, specimens should be frozen at -70 °C in a deep-freezer or stored in a liquid nitrogen container. Storage even for short periods at –20 °C is not recommended.
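The storage rules above amount to a simple duration-based choice. A sketch for illustration; the function name and returned strings are hypothetical, and the 24-hour boundary follows the text:

```python
# Sketch of the specimen-storage guidance above. Illustrative only; the
# function name and return strings are not part of any standard.

def specimen_storage(hours_until_processing: float) -> str:
    if hours_until_processing <= 24:
        return "refrigerate at +4 to +8 C (or pack in wet ice)"
    # Longer storage: deep-freeze at -70 C or use liquid nitrogen.
    # Storage at -20 C is not recommended, even for short periods.
    return "freeze at -70 C or store in liquid nitrogen"

print(specimen_storage(12))
print(specimen_storage(72))
```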

Cell culture is the most widely used method for dengue virus isolation. The mosquito cell lines C6/36 (cloned from Ae. albopictus) and AP61 (derived from Ae. pseudoscutellaris) are the host cells of choice for routine isolation of dengue virus. Since not all wild-type dengue viruses induce a cytopathic effect in mosquito cell lines, cell cultures must be screened for specific evidence of infection by an antigen detection immunofluorescence assay using serotype-specific monoclonal antibodies and flavivirus group-reactive or dengue complex-reactive monoclonal antibodies. Several mammalian cell cultures, such as Vero, LLCMK2 and BHK21, may also be used but are less efficient.

Virus isolation followed by an immunofluorescence assay for confirmation generally requires 1–2 weeks and is possible only if the specimen is properly transported and stored to preserve the viability of the virus in it.

When no other methods are available, clinical specimens may also be inoculated intracranially into suckling mice or intrathoracically into mosquitoes. Newborn animals may develop encephalitis symptoms, but with some dengue strains mice exhibit no signs of illness. Virus antigen is then detected in mouse brain or mosquito head squashes by staining with anti-dengue antibodies.

Nucleic acid detection

RNA is heat-labile and therefore specimens for nucleic acid detection must be handled
and stored according to the procedures described for virus isolation.

RT-PCR

Since the 1990s, several reverse transcriptase-polymerase chain reaction (RT-PCR) assays have been developed. They offer better sensitivity compared to virus isolation with a much more rapid turnaround time. In situ RT-PCR offers the ability to detect dengue RNA in paraffin-embedded tissues.

All nucleic acid detection assays involve three basic steps: nucleic acid extraction and purification, amplification of the nucleic acid, and detection and characterization of the amplified product. Extraction and purification of viral RNA from the specimen can be done by traditional liquid phase separation methods (e.g. phenol, chloroform) but has been gradually replaced by silica-based commercial kits (beads or columns) that are more reproducible and faster, especially since they can be automated using robotics systems. Many laboratories utilize a nested RT-PCR assay, using universal dengue primers targeting the C/prM region of the genome for an initial reverse transcription and amplification step, followed by a nested PCR amplification that is serotype-specific.[Lanciotti RS et al. Rapid detection and typing of dengue viruses from clinical samples by using reverse transcriptase-polymerase chain reaction. Journal of Clinical Microbiology, 1992, 30:545–551.]

A combination of the four serotype-specific oligonucleotide primers in a single reaction tube (one-step multiplex RT-PCR) is an interesting alternative to the nested RT-PCR.[Harris E et al. Typing of dengue viruses in clinical specimens and mosquitoes by single tube multiplex reverse transcriptase PCR. Journal of Clinical Microbiology, 1998,36:2634–2639.] The products of these reactions are separated by electrophoresis on an agarose gel, and the amplification products are visualized as bands of different molecular weights in the agarose gel using ethidium bromide dye, and compared with standard molecular weight markers. In this assay design, dengue serotypes are identified by the size of their bands.

Compared to virus isolation, the sensitivity of the RT-PCR methods varies from 80% to 100% and depends on the region of the genome targeted by the primers, the approach used to amplify or detect the PCR products (e.g. one-step RT-PCR versus two-step RT-PCR), and the method employed for subtyping (e.g. nested PCR, blot hybridization with specific DNA probes, restriction site-specific PCR, sequence analysis, etc.). To avoid false positive results due to non-specific amplification, it is important to target regions of the genome that are specific to dengue and not conserved among flavi- or other related viruses. False-positive results may also occur as a result of contamination by amplicons from previous amplifications. This can be prevented by physical separation of different steps of the procedure and by adhering to stringent protocols for decontamination.

Real-time RT-PCR

The real-time RT-PCR assay is a one-step assay system used to quantitate viral RNA, using primer pairs and probes that are specific to each dengue serotype. The use of a fluorescent probe enables the detection of the reaction products in real time, in a specialized PCR machine, without the need for electrophoresis. Many real-time RT-PCR assays have been developed employing TaqMan or SYBR Green technologies. The TaqMan real-time PCR is highly specific due to the sequence-specific hybridization of the probe. Nevertheless, primers and probes reported in publications may not be able to detect all dengue virus strains: the sensitivity of the primers and probes depends on their homology with the targeted gene sequence of the particular virus analyzed. The SYBR Green real-time RT-PCR has the advantage of simplicity in primer design and uses universal RT-PCR protocols but is theoretically less specific.

Real-time RT-PCR assays are either “singleplex” (i.e. detecting only one serotype at a time) or “multiplex” (i.e. able to identify all four serotypes from a single sample). The multiplex assays have the advantage that a single reaction can determine all four serotypes without the potential for introduction of contamination during manipulation of the sample. However, the multiplex real-time RT-PCR assays, although faster, are currently less sensitive than nested RT-PCR assays. An advantage of this method is the ability to determine viral titre in a clinical sample, which may be used to study the pathogenesis of dengue disease.[Vaughn DW et al. Dengue viremia titer, antibody response pattern and virus serotype correlate with disease severity. Journal of Infectious Diseases, 2000, 181:2–9.]
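Quantitation of viral titre by real-time RT-PCR is typically done by interpolating a sample's cycle threshold (Ct) on a standard curve built from serial dilutions of a known standard. A stdlib-only sketch; the Ct values below are invented to model a perfect ten-fold dilution series, not real data:

```python
import math

# Standard-curve quantitation sketch: fit Ct = m*log10(copies) + b on
# serial dilutions of a known standard, then invert for an unknown sample.
# The standard Ct values are invented for illustration.

def fit_standard_curve(standards):
    """standards: list of (copies_per_ml, ct). Least-squares fit of
    ct against log10(copies). Returns (slope, intercept)."""
    xs = [math.log10(c) for c, _ in standards]
    ys = [ct for _, ct in standards]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def copies_from_ct(ct, slope, intercept):
    """Invert the fitted curve for an unknown sample's Ct."""
    return 10 ** ((ct - intercept) / slope)

# Ten-fold dilution series; a 100%-efficient reaction has slope ~ -3.32.
standards = [(1e7, 15.0), (1e6, 18.32), (1e5, 21.64), (1e4, 24.96)]
m, b = fit_standard_curve(standards)
print(copies_from_ct(20.0, m, b))
```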

Isothermal amplification methods

The NASBA (nucleic acid sequence based amplification) assay is an isothermal RNA specific amplification assay that does not require thermal cycling instrumentation. The initial stage is a reverse transcription in which the single-stranded RNA target is copied into a double-stranded DNA molecule that serves as a template for RNA transcription. Detection of the amplified RNA is accomplished either by electrochemiluminescence or in real-time with fluorescent-labelled molecular beacon probes. NASBA has been adapted to dengue virus detection with sensitivity near that of virus isolation in cell cultures and may be a useful method for studying dengue infections in field studies.[Shu PY, Huang JH. Current advances in dengue diagnosis. Clinical and Diagnostic Laboratory Immunology, 2004, 11(4):642–650.]

Loop-mediated amplification methods have also been described but their performance compared to other nucleic acid amplification methods is not known.[Parida MM et al. Rapid detection and differentiation of dengue virus serotypes by a real-time reverse transcription-loop-mediated isothermal amplification assay. Journal of Clinical Microbiology, 2005, 43:2895–2903 (doi: 10.1128/JCM.43.6.2895-2903.2005).]

Detection of antigens

Until recently, detection of dengue antigens in acute-phase serum was rare in patients with secondary infections because such patients had pre-existing virus–IgG antibody immunocomplexes. New developments in ELISA and dot blot assays directed to the envelope/membrane (E/M) antigen and the non-structural protein 1 (NS1) demonstrated that high concentrations of these antigens in the form of immune complexes could be detected in patients with both primary and secondary dengue infections up to nine days after the onset of illness.

The NS1 glycoprotein is produced by all flaviviruses and is secreted from infected mammalian cells. NS1 elicits a very strong humoral response. Many studies have been directed at using the detection of NS1 to make an early diagnosis of dengue virus infection.

Commercial kits for the detection of NS1 antigen are now available, though they do not differentiate between dengue serotypes. Their performance and utility are currently being evaluated by laboratories worldwide, including the WHO/TDR/PDVI laboratory network. Fluorescent antibody, immunoperoxidase and avidin-biotin enzyme assays allow detection of dengue virus antigen in acetone-fixed leucocytes and in snap-frozen or formalin-fixed tissues collected at autopsy.

Serological tests

MAC-ELISA

For the IgM antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA) total IgM in patients’ sera is captured by anti-μ chain specific antibodies (specific to human IgM) coated onto a microplate. Dengue-specific antigens, from one to four serotypes (DEN-1, -2, -3, and -4), are bound to the captured anti-dengue IgM antibodies and are detected by monoclonal or polyclonal dengue antibodies directly or indirectly conjugated with an enzyme that will transform a non-coloured substrate into coloured products. The optical density is measured by spectrophotometer.

Serum, blood on filter paper and saliva, but not urine, can be used for detection of IgM if samples are taken within the appropriate time frame (five days or more after the onset of fever). Serum specimens may be tested at a single dilution or at multiple dilutions. Most of the antigens used for this assay are derived from the dengue virus envelope protein (usually virus-infected cell culture supernatants or suckling mouse brain preparations). MAC-ELISA has good sensitivity and specificity but only when used five or more days after the onset of fever. Different commercial kits (ELISA or rapid tests) are available but have variable sensitivity and specificity. A WHO/TDR/PDVI laboratory network recently evaluated selected commercial ELISAs and first-generation rapid diagnostic tests, finding that ELISAs generally performed better than rapid tests.

Cross-reactivity with other circulating flaviviruses such as Japanese encephalitis, St Louis encephalitis and yellow fever does not seem to be a problem, but some false positives were obtained in sera from patients with malaria, leptospirosis and past dengue infection.[Hunsperger EA et al. Evaluation of commercially available anti–dengue virus immunoglobulin M tests. Emerging Infectious Diseases (serial online), 2009, March (date cited). Accessible at http://www.cdc.gov/EID/content/15/3/436.htm]

These limitations have to be taken into account when using the tests in regions where these pathogens co-circulate. It is recommended that tests be evaluated against a panel of sera from relevant diseases in a particular region before being released to the market. It is not possible to use IgM assays to identify dengue serotypes as these antibodies are broadly cross-reactive even following primary infections. Recently, some authors have described MAC-ELISA that could allow serotype determination but further evaluations are required.[Vazquez S et al. Serological markers during dengue 3 primary and secondary infections. Journal of Clinical Virology, 2005, 33(2):132–137.]

IgG ELISA

The IgG ELISA is used for the detection of recent or past dengue infections (if paired sera are collected within the correct time frame). This assay uses the same antigens as the MAC ELISA. The use of E/M-specific capture IgG ELISA (GAC) allows detection of IgG antibodies over a period of 10 months after the infection. IgG antibodies are lifelong as measured by E/M antigen-coated indirect IgG ELISA, but a fourfold or greater increase in IgG antibodies in acute and convalescent paired sera can be used to document recent infections. Test results correlate well with the haemagglutination-inhibition test.

An ELISA inhibition method (EIM) to detect IgG dengue antibodies is also used for the serological diagnosis and surveillance of dengue cases. This system is based on competition for antigen sites between IgG dengue antibodies in the sample and a conjugated human anti-dengue IgG.[Fernandez RJ, Vazquez S. Serological diagnosis of dengue by an ELISA inhibition method (EIM). Memórias do Instituto Oswaldo Cruz, 1990, 85(3):347–351.]

This method can be used to detect IgG antibodies in serum or plasma and filter-paper stored blood samples and permits identification of a case as a primary or secondary dengue infection.

[Vazquez S, Fernandez R, Llorente C. Usefulness of blood specimens on paper strips for serologic studies with inhibition ELISA. Revista do Instituto de Medicina Tropical de São Paulo, 1991, 33(4):309–311.]

[Vazquez S et al. Kinetics of antibodies in sera, saliva, and urine samples from adult patients with primary or secondary dengue 3 virus infections. International Journal of Infectious Diseases, 2007, 11:256–262.]

In general, IgG ELISA lacks specificity within the flavivirus serocomplex groups. Following viral infections, newly produced antibodies are less avid than antibodies produced months or years after infection. Antibody avidity is used in a few laboratories to discriminate primary and secondary dengue infections. Such tests are not in wide use and are not available commercially.

IgM/IgG ratio

A dengue virus E/M protein-specific IgM/IgG ratio can be used to distinguish primary from secondary dengue virus infections. IgM-capture and IgG-capture ELISAs are the most common assays for this purpose. In some laboratories, dengue infection is defined as primary if the IgM/IgG OD ratio is greater than 1.2 (using patients' sera at 1/100 dilution) or 1.4 (at 1/20 dilution), and as secondary if the ratio falls below the respective cut-off. This algorithm has also been adopted by some commercial vendors. However, ratios may vary between laboratories, indicating the need for better standardization of test performance.[Falconar AK, de Plata E, Romero-Vivas CM. Altered enzyme-linked immunosorbent assay immunoglobulin M (IgM)/IgG optical density ratios can correctly classify all primary or secondary dengue virus infections 1 day after the onset of symptoms, when all of the viruses can be isolated. Clinical and Vaccine Immunology, 2006, 13:1044–1051.]
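The ratio-based classification can be sketched as follows. This is a hypothetical illustration of the cut-offs quoted in the text (1.2 at 1/100 serum dilution, 1.4 at 1/20); the function name is invented, and cut-offs vary between laboratories and should be validated locally:

```python
def classify_by_igm_igg_ratio(igm_od, igg_od, dilution="1/100"):
    """Classify a dengue infection as primary or secondary from ELISA
    optical densities (ODs), using the cut-offs quoted in the text.

    Hypothetical sketch only; cut-offs vary between laboratories.
    """
    cutoff = 1.2 if dilution == "1/100" else 1.4  # 1.4 applies at 1/20 dilution
    ratio = igm_od / igg_od
    return "primary" if ratio > cutoff else "secondary"

print(classify_by_igm_igg_ratio(1.8, 1.0))  # primary (ratio 1.8 > 1.2)
print(classify_by_igm_igg_ratio(0.6, 1.0))  # secondary (ratio 0.6 < 1.2)
```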

IgA

Serum anti-dengue IgA, as measured by the anti-dengue virus IgA capture ELISA (AAC-ELISA), typically becomes detectable about one day later than IgM. The IgA titre peaks around day 8 after onset of fever and then decreases rapidly until it is undetectable by day 40. No differences in IgA titres have been reported between patients with primary and secondary infections. Even though IgA values are generally lower than IgM, both in serum and saliva, the two methods could be performed together to help in interpreting dengue serology.

[Vazquez S et al. Kinetics of antibodies in sera, saliva, and urine samples from adult patients with primary or secondary dengue 3 virus infections. International Journal of Infectious Diseases, 2007, 11:256–262.]

[Nawa M. Immunoglobulin A antibody responses in dengue patients: a useful
marker for serodiagnosis of dengue virus infection. Clinical and Vaccine Immunology, 2005, 12:1235–1237.]

This approach is not used very often and requires additional evaluation.

Haemagglutination-inhibition test

The haemagglutination-inhibition (HI) test is based on the ability of dengue antigens to agglutinate the red blood cells (RBC) of ganders or trypsinized human type O RBC. Anti-dengue antibodies in sera inhibit this agglutination, and the potency of this inhibition is measured in the HI test. Serum samples are treated with acetone or kaolin to remove non-specific inhibitors of haemagglutination, and then adsorbed with gander or trypsinized type O human RBC to remove non-specific agglutinins. Each antigen batch must be titrated, and the test requires different pH buffers for each serotype. Optimally, the HI test requires paired sera obtained upon hospital admission (acute) and discharge (convalescent), or paired sera with an interval of more than seven days. The assay discriminates neither between infections by closely related flaviviruses (e.g. between dengue virus and Japanese encephalitis virus or West Nile virus) nor between immunoglobulin isotypes.

The response to a primary infection is characterized by a low level of antibodies in the acute-phase serum drawn before day 5 and a slow rise in HI antibody titres thereafter. During secondary dengue infections, HI antibody titres rise rapidly, usually exceeding 1:1280; values below this are generally observed in convalescent sera from patients with primary responses.
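This rule of thumb for convalescent HI titres can be written as a one-line check. This is a hypothetical sketch (invented function name) of the guidance above, not a validated diagnostic rule; titres are reciprocal dilutions:

```python
def interpret_hi_response(convalescent_titre):
    """Interpret a reciprocal convalescent HI titre (e.g. 1280 for 1:1280).

    Titres of 1:1280 or above are typical of secondary infections; lower
    values are generally seen in primary responses. Hypothetical sketch
    of the rule of thumb described in the text.
    """
    return "secondary" if convalescent_titre >= 1280 else "primary"

print(interpret_hi_response(2560))  # secondary
print(interpret_hi_response(640))   # primary
```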

Haematological tests

Platelets and haematocrit values are commonly measured during the acute stages of dengue infection. These should be performed carefully using standardized protocols, reagents and equipment.

A drop in the platelet count to below 100 000 per μL may be observed in dengue fever, but it is a constant feature of dengue haemorrhagic fever. Thrombocytopaenia is usually observed between day 3 and day 8 after the onset of illness.

Haemoconcentration, as estimated by an increase in haematocrit of 20% or more compared with convalescent values, is suggestive of hypovolaemia due to vascular permeability and plasma leakage.
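The two haematological criteria above can be expressed as simple checks. The sketch below is hypothetical (invented function names); the thresholds are those stated in the text, and clinical interpretation of course requires the full picture:

```python
def thrombocytopaenia(platelets_per_ul):
    """True if the platelet count is below 100 000 per microlitre."""
    return platelets_per_ul < 100_000

def haemoconcentration(acute_hct, convalescent_hct):
    """True if the acute haematocrit is 20% or more above the convalescent
    (baseline) value, suggesting vascular permeability and plasma leakage."""
    rise_pct = (acute_hct - convalescent_hct) / convalescent_hct * 100
    return rise_pct >= 20

print(thrombocytopaenia(85_000))       # True
print(haemoconcentration(48.0, 40.0))  # True: a 20% rise over baseline
```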

Future test developments

Microsphere-based immunoassays (MIAs) are becoming increasingly popular as a serological option for the laboratory diagnosis of many diseases. This technology is based on the covalent bonding of antigen or antibody to microspheres or beads. Detection methods include lasers to elicit fluorescence of varying wavelengths. This technology is attractive as it is faster than the MAC-ELISA and has potential for multiplexing serological tests designed to identify antibody responses to several viruses. MIAs can also be used to detect viruses.

Rapid advances in biosensor technology using mass spectrometry have led to the development of powerful systems that can provide rapid discrimination of biological components in complex mixtures. The mass spectra that are produced can be considered a specific fingerprint or molecular profile of the bacteria or virus analysed. The software system built into the instrument identifies and quantifies the pathogen in a given sample by comparing the resulting mass spectra with those in a database of infectious agents, and thus allows the rapid identification of many thousands of types of bacteria and viruses. Additionally, these tools can recognize a previously unidentified organism in the sample and describe how it is related to those encountered previously. This could be useful in determining not only dengue serotypes but also dengue genotypes during an outbreak. Identification kits for infectious agents are available in 96-well format and can be designed to meet specific requirements. Samples are processed for DNA extraction, PCR amplification, mass spectrometry and computer analysis.

Microarray technology makes it possible to screen a sample for many different nucleic acid fragments corresponding to different viruses in parallel. The genetic material must be amplified before hybridization to the microarray, and amplification strategy can target conserved sequences as well as random-based ones. Short oligonucleotides attached on the microarray slide give a relatively exact sequence identification, while longer DNA fragments give a higher tolerance for mismatches and thus an improved ability to detect diverged strains. A laser-based scanner is commonly used as a reader to detect amplified fragments labelled with fluorescent dyes. Microarray could be a useful technology to test, at the same time, dengue virus and other arboviruses circulating in the region and all the pathogens responsible for dengue-like symptoms.

Other approaches have been tested but are still in the early stages of development and evaluation. For instance, the luminescence-based techniques are becoming increasingly popular owing to their high sensitivity, low background, wide dynamic range and relatively inexpensive instrumentation.

QUALITY ASSURANCE

Many laboratories use in-house assays. The main weakness of these assays is the lack of standardization of protocols, so results cannot be compared or analysed in aggregate. It is important for national or reference centres to organize quality assurance programmes to ensure the proficiency of laboratory staff in performing the assays and to produce reference materials for quality control of test kits and assays.

For nucleic acid amplification assays, precautions need to be established to prevent contamination of patient materials. Controls and proficiency-testing are necessary to ensure a high degree of confidence.[Lemmer K et al. External quality control assessments in PCR diagnostics of dengue virus infections. Journal of Clinical Virology, 2004, 30:291–296.]

BIOSAFETY ISSUES

The collection and processing of blood and other specimens place health care workers at risk of exposure to potentially infectious material. To minimize the risk of infection, safe laboratory techniques (i.e. use of personal protective equipment, appropriate containers for collecting and transporting samples, etc.) must be practised as described in WHO’s Laboratory biosafety manual.[WHO. Laboratory biosafety manual, 3rd ed. Geneva, World Health Organization, 2004 (ISBN 92 4 154650 6, WHO/CDS/CSR/LYO/2004.11, http://www.who.int/csr/resources/publications/biosafety/Biosafety7.pdf).]

ORGANIZATION OF LABORATORY SERVICES

In a disease-endemic country, it is important to organize laboratory services in the context of patients’ needs and disease control strategies. Appropriate resources should be allocated and training provided.

Proposed model for organisation of laboratory services.
Dengue laboratory diagnosis: examples of good and bad practice

Fluorescent Light-Emitting Diode (LED) Microscopy For Diagnosis of Tuberculosis (WHO)

This is an acid-fast stain of Mycobacterium tuberculosis (MTB). Note the red rods, hence the terminology for MTB in histologic sections or smears: acid-fast bacilli.

Image Source: http://library.med.utah.edu/WebPath/INFEHTML/INFEC033.html

A sputum smear containing Mycobacterium tuberculosis bacteria, as seen under the LED fluorescence microscope

LED fluorescence microscope

Source: http://www.finddiagnostics.org/programs/tb/find_activities/led_microscopy.html

Source: http://www.olympusmicro.com/primer/techniques/fluorescence/anatomy/fluoromicroanatomy.html

Mercury vapour lamp fluorescence microscopy

Direct sputum smear microscopy is the most widely used means for diagnosing pulmonary TB and is available in most primary health-care laboratories at health-centre level. Most laboratories use conventional light microscopy to examine Ziehl-Neelsen-stained direct smears; this has been shown to be highly specific in areas with a high prevalence of TB but with varying sensitivity (20–80%).

Fluorescence microscopy is approximately 10% more sensitive than conventional Ziehl-Neelsen microscopy, and examination of fluorochrome-stained smears takes less time.

Uptake of fluorescence microscopy has, however, been limited by its high cost, due to expensive mercury vapour light sources, the need for regular maintenance and the requirement for a dark room. 

LED microscopy was developed mainly to give resource-limited countries access to the benefits of fluorescence microscopy.

  1. First, existing fluorescence microscopes were converted to LED light sources. Considerable research and development subsequently resulted in inexpensive, robust LED microscopes or LED attachments for routine use in resource-limited settings.
  2. In comparison with conventional mercury vapour fluorescence microscopes, LED microscopes are less expensive, require less power and can run on batteries. Furthermore, the bulbs have a long half-life and do not pose the risk of releasing potentially toxic products if broken, and LED microscopes are reported to perform equally well in a lit room. These qualities make LED microscopy feasible for use in resource-limited settings, bringing the benefits of fluorescence microscopy (improved sensitivity and efficiency) where they are needed most.
  3. The LED microscope lamp is also inexpensive compared with the mercury vapour or halogen lamp used in conventional fluorescence microscopy, and has a life span of 10,000–50,000 hours.

Accuracy of LED in comparison with reference standards

LED microscopy showed 84% sensitivity (95% confidence interval [CI], 76–89%) and 98% specificity (95% CI, 85–97%) against culture as the reference standard. When a microscopic reference standard was used, the overall sensitivity was 93% (95% CI, 85–97%), and the overall specificity was 99% (95% CI, 98–99%). A significant increase in sensitivity was reported when direct smears were used rather than concentrated smears (89% and 73%, respectively).
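The accuracy figures above derive from 2×2 comparisons against a reference standard. For reference, sensitivity and specificity are computed as below; the counts used are purely illustrative round numbers, not the study's data:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table against a reference
    standard: TP/FN are among reference-positives, TN/FP among
    reference-negatives."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts only (100 reference-positive, 100 reference-negative):
sens, spec = diagnostic_accuracy(tp=84, fp=2, fn=16, tn=98)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=84%, specificity=98%
```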

Accuracy of LED in comparison with Ziehl-Neelsen microscopy

LED microscopy was statistically significantly more sensitive by 6% (95% CI, 0.1–13%), with no appreciable loss in specificity, when compared with direct Ziehl-Neelsen microscopy.

Accuracy of LED in comparison with conventional fluorescence microscopy

LED microscopy was 5% (95% CI, 0–11%) more sensitive and 1% (95% CI, −0.7–3%) more specific than conventional fluorescence microscopy.

In qualitative assessments of user characteristics and outcomes in relation to implementation, such as time to reading, cost-effectiveness, training and smear fading, the main findings were:

  1. In comparison with Ziehl-Neelsen, LED showed similar gains in time for reading as conventional fluorescence microscopy, with about half the time for smear examination.
  2. Cost assessments predict better cost-effectiveness with LED than with Ziehl-Neelsen microscopy, with improved efficiency.
  3. Qualitative assessments of LED microscopy confirmed many anticipated advantages, including use of the devices without a dark room, durability and portability (in the case of attachment devices); user acceptability in all field studies was reported as excellent.
  4. LED may be useful for diagnosing other diseases, e.g. malaria and trypanosomiasis, reducing the costs involved in providing integrated laboratory services.

    A thin blood smear showing trypanosomes stained with Acridine Orange, as seen under the LED fluorescence microscope

  5. Possible barriers to widescale use of LED include training of laboratory staff unfamiliar with fluorescence microscopy and the fading of inherently unstable fluorochrome stains. Evidence from standardized training suggests that full proficiency in LED microscopy can be achieved within 1 month.
  6. Adequate evidence is available to recommend the use of auramine stains for LED microscopy. Other commercial and in-house fluorochrome stains are not recommended.
  7. Evidence on the effect of fading of fluorochrome stains on the reproducibility of smear results over time suggests that current external quality assurance programmes should be adapted.
  8. The introduction of LED might affect the cost of other diagnostic modalities, e.g. light microscopy for examining urine, stools and blood, which should be retained at peripheral health laboratory level.
  9.  No studies were found on the direct effect of LED microscopy on outcomes important to patients, such as cure and treatment completion.
  10. Further research is required on the outcomes of LED microscopy that are important to patients and on combinations of LED microscopy with novel approaches for early case detection or sputum processing.

Source: WHO Policy Statement http://apps.who.int/iris/bitstream/10665/44602/1/9789241501613_eng.pdf