INFORMING THE PUBLIC ABOUT THE RISKS FROM IONIZING RADIATION*

Designers of programs for informing the public about radiation hazards need to consider the difficulties inherent in communicating highly technical information about risk. To be effective, information campaigns must be buttressed by empirical research aimed at determining what people know, what they want to know, and how best to convey that information. Drawing upon studies of risk perception, this paper describes some of the problems that any information program must confront.


INTRODUCTION
ONE DRAMATIC change in people's outlook on life in recent years is a growing awareness of the risks encountered in daily experience.
Radiation hazards, medicinal side-effects, occupational diseases, food contaminants, aviation accidents, fires, etc. increasingly fill our newspapers and our thoughts. One consequence of this awareness is pressure on the promoters and regulators of hazardous enterprises to inform people about the risks they face (see Fig. 1).

*This work was supported by the Technology Assessment and Risk Analysis Program of the National Science Foundation under Grant PRA 79-11934 to Clark University under subcontract to Perceptronics, Inc. Any opinions, findings, or conclusions expressed herein do not necessarily reflect the views of the National Science Foundation. This article is a condensed version of a paper prepared for the public meeting "A Proposed Federal Radiation Research Agenda" held at the National Institutes of Health in March 1980. The meeting's sponsors, the Committee on Federal Research into the Biological Effects of Ionizing Radiation, asked the authors to address the following questions: What kinds of information must be developed to foster enlightened public discussion on control of man-made ionizing radiation? How and by whom should such information be made accessible to the public? What end points or criteria, such as shortening of life, developing disease, etc., are useful to provide perspective for public understanding of radiation risks?
In May 1978, the White House directed the Secretary of Health, Education and Welfare to coordinate research on the health effects of radiation exposure, including the development of a public information program. The Interagency Task Force established to carry out this directive completed seven reports, including one on public information. The latter report focused on the need to provide information about radiation risks to the following audiences: medical and dental patients, workers exposed in their occupations, military personnel and civilians exposed to fallout from nuclear weapons testing, and the general public.
The task force recommended that messages developed for these audiences stress the following points:
• Low-level background radiation is a part of the earth's natural environment. Any man-made radiation exposure adds to that already received from natural sources.
• The degree of risk associated with exposure to low-level ionizing radiation is thought to be very low.
• Scientists disagree about the precise magnitude of this risk.
• Unnecessary radiation exposure should be avoided.
• Any risk from radiation must be balanced against the benefits provided by the activity producing the radiation.
In addition, the public information report called for the development and presentation of information that describes the benefits and risks of radiation (and facilitates their comparison), outlines the scientific basis for risk estimates, and explains why such estimates are difficult to make for any given individual. Finally, the report called for a national survey of public attitudes and knowledge about radiation as an aid to designing public information materials.

CONFRONTING HUMAN LIMITATIONS
We strongly endorse the recommendation that programs be developed to inform patients, workers, and the general public. However, neither the report by the Interagency Task Force nor any similar document that we have seen adequately acknowledges the difficulties inherent in communicating highly technical information about risk.
Doing an adequate job means finding cogent ways of presenting complex, technical material that is clouded with uncertainty and may be distorted by the listeners' preconceptions (and perhaps misconceptions) about the hazard and its consequences. This section offers a brief overview of some of the problems facing any information program.

(a) It is hard to think clearly about risk
Decisions about risk from radiation (or any other source) require sophisticated reasoning on the part of both experts and the public. Needed are an appreciation of the probabilistic nature of the world and the ability to think intelligently about rare (but consequential) events. As Alvin Weinberg observed in the context of managing nuclear power, "... we certainly accept on faith that our human intellect is capable of dealing with this new source of energy" (We76, p. 21). Unfortunately, although the human intellect is deservedly held in high esteem in many contexts, numerous studies have shown that intelligent people have difficulty judging probabilities, making predictions, or otherwise attempting to cope with uncertainty. Frequently, these difficulties can be traced to the use of judgmental heuristics, mental strategies whereby people try to reduce difficult tasks to simpler judgments. These heuristics are valid in some circumstances, but in others, they lead to biases that are large and persistent (S174, S177, Tv74).

(b) People's perceptions of risks are often inaccurate
Of the heuristics that people use in probabilistic thinking, one, the "availability heuristic," has special relevance to risk perception. Users of the "availability heuristic" judge an event to be likely or frequent if it is easy to imagine or recall relevant instances of that event. Instances of frequent events are typically easier to recall than instances of less frequent events, and likely occurrences are easier to imagine than unlikely ones. Thus availability is often an appropriate cue for judging frequency and probability. However, since availability is also affected by numerous factors unrelated to likelihood, reliance on it may lead to overestimation of probabilities for recent, vivid, emotionally salient or otherwise memorable or imaginable events. In the extreme, any factor that makes a hazard unusually memorable or imaginable, such as a recent disaster or a sensational film (e.g. Jaws or The China Syndrome), could seriously distort that hazard's perceived risk.
The biasing effects of availability may be seen in a study of the perceived frequency of various causes of death (Li78). That study demonstrated that the frequencies of dramatic or sensational causes of death, such as accidents, homicide, cancer, botulism and tornadoes, were greatly overestimated. Frequencies of undramatic causes, such as asthma, emphysema and diabetes, which take one life at a time and are common in nonfatal form, were greatly underestimated. News media coverage of fatal events has been shown to be biased in much the same direction, thus contributing to the difficulties of keeping proper mental accounts of everyday risks (Co79a).
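The mechanism can be made concrete with a toy simulation (the frequencies and recall probabilities below are invented for illustration, not data from the studies cited): if dramatic instances are far more likely to be recalled than undramatic ones, then frequency judgments built from whatever comes to mind will overstate the dramatic causes.

```python
import random

# Toy simulation of the availability heuristic. All numbers are
# illustrative assumptions, not data from the studies cited in the text.
random.seed(0)

true_rate = {"dramatic cause": 0.02, "undramatic cause": 0.20}    # true frequency of each cause
recall_prob = {"dramatic cause": 0.90, "undramatic cause": 0.10}  # chance an instance is memorable

N = 100_000  # observed events (news stories, anecdotes, personal experience)
recalled = {}
for cause, rate in true_rate.items():
    occurrences = int(N * rate)
    # Count how many instances of this cause remain "available" in memory.
    recalled[cause] = sum(random.random() < recall_prob[cause] for _ in range(occurrences))

total_true = sum(true_rate.values())
total_recalled = sum(recalled.values())
for cause in true_rate:
    print(f"{cause}: true share {true_rate[cause] / total_true:.0%}, "
          f"judged share (from recall) {recalled[cause] / total_recalled:.0%}")
# The dramatic cause accounts for about 9% of true occurrences but nearly
# half of the recalled instances, so its frequency is grossly overestimated.
```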
Another important type of misperception is the tendency to consider ourselves personally immune to many hazards that we admit pose a serious threat to others. In a report titled "Are We All Among the Better Drivers?", Svenson showed that most people rate themselves as among the most skillful and safe drivers in the population (Sv79). This effect does not seem to be limited to driving. Rethans (Re79a) found that most people rated their personal risk from each of 29 consumer products (e.g. knives, hammers) as lower than the risk to other individuals. Ninety-seven percent of Rethans' respondents judged themselves average or above average in their ability to avoid both bicycle and power mower accidents. Weinstein (We80) found that people were unrealistically optimistic when evaluating the chances that a wide variety of good and bad life events (e.g. living past 80, having a heart attack) would happen to them.
Although the determinants of such personal optimism are not well understood, we believe that several contributing factors can be identified. First, the hazardous activities for which personal risks are underestimated tend to be seen (exaggeratedly) as under the individual's control. Second, they tend to be familiar hazards whose risks are low enough that the individual's personal experience is overwhelmingly benign. Automobile driving is a prime example of such a hazard. Despite driving too fast, tailgating, etc., poor drivers make trip after trip without mishap. This personal experience demonstrates to these drivers their exceptional skill and safety. Moreover, their indirect experience via the media shows them that when accidents do happen, they happen to others. Given such misleading experiences, people may feel quite justified in refusing to take protective action such as wearing seat belts (S178).
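A rough calculation shows how overwhelmingly benign that personal experience is likely to be (a sketch; the per-trip injury rate is the one quoted later in this paper, while the trips-per-year figure is our assumption):

```python
# Sketch: why years of accident-free driving say little about driving skill.
# The per-trip rate is the disabling-injury figure cited later (S178);
# trips_per_year is an illustrative assumption.
p_injury_per_trip = 1 / 100_000
trips_per_year = 800  # roughly two to three trips a day

for years in (1, 10, 50):
    n_trips = trips_per_year * years
    p_no_injury = (1 - p_injury_per_trip) ** n_trips
    print(f"{years:>2} years ({n_trips:>6} trips): "
          f"probability of no disabling injury = {p_no_injury:.2f}")
# Even after decades, most drivers (good and bad alike) have experienced
# nothing worse than near misses, so experience appears to "confirm" skill.
```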

(c) Risk information may frighten and frustrate the public
The fact that perceptions of risk are often inaccurate points to the need for educational programs. However, to the extent that misperceptions are due to reliance on imaginability as a cue for probability, such programs may run into trouble. Merely mentioning possible adverse consequences of radiation could enhance their perceived likelihood and make them appear more frightening. Anecdotal observation of attempts to inform people about recombinant DNA hazards supports this hypothesis (Ro78), but controlled research is needed to test it more adequately. To the extent that imaginability can blur the distinction between what is (remotely) possible and what is probable, information materials will have to be designed with great care.
Other psychological research shows that people have great difficulty making decisions about gambles when they are forced to resolve conflicts generated by the possibility of experiencing both gains and losses, and uncertain ones at that (Li73). As a result, people often attempt to reduce the anxiety generated in the face of uncertainty by denying that uncertainty, thus making the risk seem so small it can safely be ignored or so large that it clearly should be avoided. They rebel against being given statements of probability, rather than fact; they want to know exactly what will happen. Thus, just before hearing a blue-ribbon panel of scientists report being 95% certain that cyclamates do not cause cancer, former Food and Drug Administration Commissioner Alexander Schmidt said, "I'm looking for a clean bill of health, not a wishy-washy, iffy answer on cyclamates." Likewise, former Senator Muskie once called for "one-armed" scientists who do not respond "on the one hand, the evidence is so, but on the other hand..." when asked about the health effects of pollutants.
Given a choice, people would rather not have to confront the gambles inherent in living with radiation. They would prefer being told that radiation is managed by competent professionals and is thus so safe they need not worry about it. However, if such assurances cannot be given, they will want to be informed of the risks, even though doing so might make them anxious and conflicted (e.g. Fi81, We79a).

(d) Strong beliefs are hard to modify
The difficulties of facing life as a gamble contribute to the polarization of opinion about technologies such as nuclear power or genetic recombinations; some view these technologies as extraordinarily safe, while others view them as catastrophes in the making. It would be comforting to believe that such polarized positions would respond to informational and educational programs. Unfortunately, psychological research demonstrates that people's beliefs change slowly and are extraordinarily persistent in the face of contrary evidence. Once formed, initial impressions tend to structure the way that subsequent evidence is interpreted. New evidence appears reliable and informative if it is consistent with one's initial belief; contrary evidence is dismissed as unreliable, erroneous or unrepresentative. Thus, depending on whether one is predisposed to favor nuclear power or oppose it, efforts to reduce nuclear hazards may be interpreted to mean either that the technologists are responsive to the public's concerns or that the risks are indeed great. Similarly, whereas opponents of nuclear power viewed the accident at Three Mile Island as proof that nuclear reactors are unsafe, proponents claimed that it demonstrated the effectiveness of the multiple safety and containment systems.

(e) Presentation format is vitally important
The precise manner in which risks are expressed can have a major impact on perceptions. For example, an action increasing one's annual chances of death from 1 in 10,000 to 1.3 in 10,000 would probably be seen as much more risky if it were described, instead, as producing a 30% increase in annual mortality. Numerous effects of presentation format have been documented in the literature on risk assessment (Fi78, Tv81). Here, we shall present but two examples. The first is based on a pair of problems that Tversky and Kahneman (Tv81) gave to two groups of college students. Each problem had two options, and respondents were asked to indicate which option they would choose.

Problem 1. Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If Program A is adopted, 200 people will be saved. If Program B is adopted, there is 1/3 probability that 600 people will be saved and 2/3 probability that no people will be saved. Which of the two programs would you favor?

Problem 2. (Same cover story as Problem 1.) If Program C is adopted, 400 people will die. If Program D is adopted, there is 1/3 probability that nobody will die and 2/3 probability that 600 people will die. Which of the two programs would you favor?

Seventy-five percent of respondents chose Program A over Program B and 67% chose Program D over Program C, even though A and C are identical options, as are B and D. Thus, the preference patterns were reversed by the simple change from lives saved to lives lost. Groups of physicians have been found to exhibit similar reversals.
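The reversal is striking precisely because the two framings are numerically equivalent; a minimal check, using only the figures given in the problems and in the 1-in-10,000 example above:

```python
# Check that Programs A/C and B/D describe the same outcomes, and compute
# the relative-increase framing from the 1-in-10,000 example above.
TOTAL_AT_RISK = 600

def expected_deaths(outcomes):
    """outcomes: list of (probability, number_of_deaths) pairs."""
    return sum(p * deaths for p, deaths in outcomes)

programs = {
    "A (200 saved for sure)":        [(1.0, TOTAL_AT_RISK - 200)],
    "B (1/3 all saved, 2/3 none)":   [(1/3, 0), (2/3, TOTAL_AT_RISK)],
    "C (400 die for sure)":          [(1.0, 400)],
    "D (1/3 none die, 2/3 all die)": [(1/3, 0), (2/3, TOTAL_AT_RISK)],
}
for name, outcomes in programs.items():
    print(f"Program {name}: expected deaths = {expected_deaths(outcomes):.0f}")
# All four print 400: A is the same gamble as C, and B the same as D.

# The mortality-framing example: 1 in 10,000 rising to 1.3 in 10,000.
baseline, increased = 1 / 10_000, 1.3 / 10_000
print(f"absolute increase: {increased - baseline:.5f} per year")      # 0.00003
print(f"relative increase: {(increased - baseline) / baseline:.0%}")  # 30%
```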
A second demonstration of the importance of presentation format comes from a study of attitudes towards the use of automobile seat belts (S178). Drawing upon previous research showing that the probability of loss was more important than the magnitude of loss in triggering protective action, Slovic et al. argued that motorists' reluctance to wear seat belts might be due to the extremely small probability of incurring a fatal accident on a single automobile trip. Since a fatal accident occurs only about once in every 3.5 million person-trips and a disabling injury occurs only about once in every 100,000 person-trips, refusing to buckle one's seat belt may seem quite reasonable. It looks less reasonable, however, if one adopts a multiple-trip perspective and considers the substantial probability of an accident on some trip. Over 50 years of driving (about 40,000 trips), the probability of being killed rises to 0.01 and the probability of experiencing at least one disabling injury is 0.33. Slovic et al. found that people who were asked to consider this lifetime perspective responded more favorably toward the use of seat belts (and air bags) than did people asked to consider a trip-by-trip perspective.
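These lifetime figures follow directly from the single-trip rates; a short sketch of the arithmetic, using the rates and trip count quoted above:

```python
# Trip-by-trip vs. lifetime framing of automobile risk, using the figures
# quoted above (S178): rates are per person-trip, 40,000 trips ~ 50 years.
p_fatal_per_trip = 1 / 3_500_000
p_injury_per_trip = 1 / 100_000
n_trips = 40_000

def prob_at_least_once(p, n):
    # Probability of at least one event in n independent trips.
    return 1 - (1 - p) ** n

print(f"single trip, fatality:      {p_fatal_per_trip:.7f}")
print(f"lifetime, fatality:         {prob_at_least_once(p_fatal_per_trip, n_trips):.2f}")   # ~0.01
print(f"lifetime, disabling injury: {prob_at_least_once(p_injury_per_trip, n_trips):.2f}")  # ~0.33
```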
The fact that subtle differences in how risks are presented can have marked effects suggests that people who inform others have considerable ability to manipulate perceptions. Indeed, since these effects are not widely known, people may inadvertently be manipulating their own perceptions by casual decisions they make about how to organize their knowledge.

PLACING RADIATION RISKS IN PERSPECTIVE
We have attempted to demonstrate some of the difficulties people have in comprehending and estimating risks. Some observers, cognizant of these difficulties, have concluded that the problems are insurmountable. We disagree. Although the broad outlines of the psychological research just described seem to support a pessimistic view, the details of that research give some cause for optimism. Upon closer examination, it appears that people understand some things quite well, although their path to knowledge may be quite different from that of the technical experts. In situations where misunderstanding is rampant, people's errors can often be traced to inadequate information and biased experiences, which education may be able to counter.
There appears to be widespread agreement within the technical community that appropriate presentations of factual material, within a comparative framework, can go a long way towards educating the public and providing a sound basis for standard setting as well. In particular, comparisons of the radiation from different sources (including nature) or of the risks from radiation and other hazards have been advanced as exemplary methods for instilling proper perspectives. In this section, we shall briefly examine and critique three popular methods of comparison.
(a) Sources of exposure
One type of comparative table partitions the average annual amount of radiation exposure according to source (natural vs technological; environmental, medical, occupational, etc.). Such tables indicate that the largest sources of exposure are natural and that the largest artificial exposures come from diagnostic X-rays. Of course, such presentations are only as useful as they are accurate. Recent research has revealed a major source of radiation not listed in most tables, namely that due to radon gas emanating from construction materials and accumulating in closed buildings. Myers and Newcombe (My79), for example, report that radon gas may be the major source of public radiation exposure, perhaps accounting for between 5 and 20% of all lung cancer deaths.

(b) Natural standards
Another approach to placing risks in perspective assumes that the optimal (or acceptable) level of exposure to a hazard is the level characteristic of the conditions in which the species evolved. Radiation standards have been based on this principle. For example, Adler (quoted by Weinberg) proposed:

. . . rather than trying to determine the actual damage caused by very low radiation insult, and then setting an allowable dose, one instead compares the man-made standard with the background. Since man has evolved in the midst of a pervasive radiation background, the presumption is that an increment of radiation "small" compared to that background is tolerable and ought to be set as the standard. [Adler] suggests that small, in the case of γ radiation, be taken as the standard deviation of the natural background, about 20 mrad/yr (We79, p. 16).

One attractive feature of such natural standards is that they can be set in the absence of precise knowledge of dose-response curves; another is that they avoid the problems of converting risks into a common unit (like workdays lost). Nonetheless, comparisons with the natural background, whether for purposes of education or standard setting, must face several criticisms. One is the fact that our natural exposure to many hazards has not diminished. Thus, whatever new exposure is allowed comes in addition to what we already receive from nature and thereby constitutes excess "unnatural" exposure. A second problem arises when the technology produces multiple sources of exposure. In principle, each such exposure could constitute a small, hence acceptable, increment over background exposures. Natural standards do not provide any clear criterion for deciding when the cumulative impact of a set of tolerable exposures is intolerable.
When defining as acceptable any activity whose risks are only slightly above natural levels, problems of definition become important. Aggregation or disaggregation of several sources of exposure can mean the difference between having several technologies, each within the limits of acceptance, or one technology outside the limits. Without clear guidelines, a consequential event could be redefined as a set of inconsequential events.
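The aggregation problem can be illustrated with a small worked example (the background value and the individual source doses are hypothetical; only the roughly 20 mrad/yr increment comes from Adler's proposal quoted above):

```python
# Hypothetical illustration of the aggregation problem with natural standards.
# Each individual source stays within a "small compared to background"
# criterion, yet the total does not.
NATURAL_BACKGROUND = 100.0  # mrad/yr, assumed round figure for illustration
INCREMENT_LIMIT = 20.0      # mrad/yr, Adler's suggested "small" increment

# Six hypothetical man-made sources, each individually acceptable.
sources = {
    "source A": 18.0,
    "source B": 15.0,
    "source C": 19.0,
    "source D": 12.0,
    "source E": 17.0,
    "source F": 16.0,
}

each_ok = all(dose <= INCREMENT_LIMIT for dose in sources.values())
total = sum(sources.values())

print(f"every source within the {INCREMENT_LIMIT} mrad/yr limit: {each_ok}")
print(f"combined increment: {total} mrad/yr "
      f"({total / NATURAL_BACKGROUND:.0%} of natural background)")
# Each source passes the natural-standard test, yet together they nearly
# double the background dose; the standard gives no rule for the sum.
```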

(c) Cross-hazard comparisons
The third approach to risk education is to present quantified risk estimates for a variety of hazards. Presumably, the sophistication gleaned from examining such data will be useful not only for broadening one's perspective but for decision making as well. Wilson (Wi79) observed that we should "try to measure our risks quantitatively... Then we could compare risks and decide which to accept or reject" (p. 43). Likewise, Sowby (So65) argued that to decide whether we are regulating radiation hazards properly, we need to pay more attention to "some of the other risks of life." Typically, such exhortations are followed by elaborate tables and even "catalogs of risks" in which diverse indices of death or disability are displayed for a broad spectrum of life's hazards. Thus Sowby (So65) provided extensive data on risks per hour of exposure, showing, for example, that an hour riding a motorcycle is as risky as an hour of being 75 yr old. Wilson (Wi79) developed a table of activities (e.g. flying 1000 miles by jet, having one chest X-ray), each of which is estimated to increase one's annual chance of death by 1 in one million (which in the case of accidental death would decrease one's life expectancy by about 15 min). In similar fashion, Cohen and Lee (Co79) ordered many hazards in terms of their reduction in life expectancy on the assumption that "to some approximation, the ordering should be society's order of priorities. However, we see several very major problems that have received very little attention... whereas some of the items near the bottom of the list, especially those involving radiation, receive a great deal of attention" (Co79, p. 720). A related exercise by Reissland and Harries (Re79) compared loss of life expectancy in the nuclear industry with that in other occupations.
Although such risk comparisons may provide some aid to intuition, they may not educate as effectively as their proponents believe. For example, although some people may feel enlightened upon learning that a single takeoff or landing in a commercial airliner takes an average of 15 min off one's life expectancy, others may find themselves completely bewildered by such information. When landing or taking off, one will either die prematurely (almost certainly by more than 15 min) or one will not. From the standpoint of the individual, averages do not adequately capture the essence of such risks.
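The 15-min figure is itself an expected value, and tracing the arithmetic makes the point clear (a sketch; the years forfeited by a typical accidental death is our assumption, chosen to be of the size such calculations generally use):

```python
# How a 1-in-a-million added annual chance of death becomes "about 15 min"
# of lost life expectancy. The years-lost figure is an illustrative
# assumption, not a number taken from Wi79.
p_death = 1e-6             # added annual probability of accidental death
years_lost_if_killed = 30  # assumed years forfeited by such a death

minutes_per_year = 365.25 * 24 * 60
expected_loss = p_death * years_lost_if_killed * minutes_per_year
print(f"expected loss of life expectancy: {expected_loss:.0f} minutes")  # ~16

# For any one person the outcome is all-or-nothing: either no loss at all
# or a loss of decades. The 15-min average describes neither experience.
```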
Furthermore, research on risk perception (e.g. S180) shows that perceptions and attitudes are determined not only by accident probabilities, annual mortality rates and losses of life expectancy, but also by numerous other characteristics of hazards such as uncertainty, controllability, catastrophic potential, equity and threat to future generations. Within the perceptual space defined by such characteristics, each hazard is unique. A statement such as "the annual risk from living near a nuclear power plant is equivalent to the risk of riding an extra 3 miles in an automobile" fails to consider how these two technologies differ on many qualities that people believe to be important. As a result, such statements are likely to produce anger rather than enlightenment.
In sum, comparisons across hazards and comparisons with natural levels of risk may be useful tools for educating the public. Yet the facts do not speak for themselves, except for those who already know what they want to hear. Comparative analyses must be performed with great care to be worthwhile. Even then, the insights they provide may be limited.

WHAT CAN RESEARCH TELL US?
To be effective, any information program must be buttressed by extensive empirical research designed to indicate what people know, what they want to know, and how best to convey that information. For example, some have speculated that people shy away from information of a threatening nature. However, psychologist Neil Weinstein (We79a) found the opposite reaction when people were given the opportunity to choose between a reassuring and a threatening message about environmentally induced cancer. Specifically, he found that:
• People were more interested in learning what the hazard might be than in receiving information minimizing its danger.
• Failure to seek information reflected a lack of interest in the topic rather than an attempt to avoid the topic because it was too threatening.
• Lack of information, even if acknowledged, did not necessarily lead people to seek out information.
• When conflicting messages were available regarding the existence of a hazard, people tended to select the message that agreed with their own point of view.
A study by Baruch Fischhoff (Fi81) provides a detailed example of the way in which research can be carried out and the kinds of insights such research might provide. Fischhoff was concerned about how best to inform temporary workers in the nuclear industry about the radiation risks they faced when performing tasks in contaminated areas.

(a) Design of the study
With the help of physicist Christoph Hohenemser, Fischhoff designed a pamphlet to inform temporary workers with the reading skills of high school graduates.* The pamphlet included a definition of "maximum permissible quarterly dose," comparisons with other exposures, a best guess at the risks of death and genetic damage incurred by such exposure, and an acknowledgement that experts disagree about these effects, with the present "best guess" expressing less risk than that believed to be the case by a minority of experts.
Each of four versions of the statement was presented to a different group of 50-60 individuals. Respondents were recruited by advertising in a university newspaper and at a state employment office. Although a somewhat special population, these individuals are not entirely unrepresentative of the (unskilled) laborers who might be confronted with the nuclear work option. About 20% reported having worked in high-risk environments in the past. After reading the statement, participants in the study made judgments in four categories: (a) appropriate pay for the job, (b) the nature and extent of the risks, (c) current and desired exposure standards, and (d) the quality of the statement and strategies for its administration.
Three factors were varied in creating the four versions: (1) Whether readers were told why temporary workers were being used. A long form included this information, a short form excluded it.
(2) How the administration of the statement was described. Most readers were told nothing about its administration; one group was asked to imagine receiving it when arriving at the nuclear facility.
(3) How pay questions were positioned. Most groups were asked to judge appropriate pay levels immediately after reading the statement; one group was asked about pay after answering questions on the other three topics. It was felt that answering the other questions might help elaborate the decision situation and affect respondents' attitudes towards pay.

(b) Results
The pamphlet was moderately well regarded by readers. They viewed it as readable, straightforward, and fairly honest. Moreover, respondents were adamant about the need for presenting such information. More than 80% answered "definitely yes" to the question: "If you had taken such a job without being shown this pamphlet, would you feel that you had been deprived of necessary information?" A majority responded "definitely no" to the question: "Is this too much information?" Most wanted even more information than was in the pamphlet. As Weinstein's study also demonstrated, people want to be told.
Respondents also had definite ideas about when such information should be presented. Almost 90% said that it should be shown when workers originally report to the personnel office (off site); 88% viewed it as "very inappropriate" to present it only when workers asked for it explicitly. When asked how the presentation of risk information could have been improved, almost all respondents had definite opinions. They wanted more information and more elaborate presentations. The most common requests were for information about the specific plant and its safety record, additional research results, and a chance to discuss the topic with other workers and specialists.
In a variety of ways, the participants were asked to evaluate the nature and magnitude of the job's radiation risk. They generally felt that the risks were neither very well nor very poorly understood by themselves or by scientists. They judged the risks to be equivalent to those incurred in a similar period of time spent in a coal mine, but worse than those encountered in activities such as domestic work or driving. They generally believed that it was quite likely that the standards reported in the booklet would be exceeded by accident, that there was no amount of radiation so small as to present no danger, and that a worker would not be able to tell at the end of the job whether any damage had been suffered.
When asked about standards, these people felt that current standards were not stringent enough and that it was unreasonable to design plants calling for such exposure of temporary workers. Three-fifths believed that companies set standards; only one-fifth believed that they should. Perhaps even more surprising was that, whereas 87% believed that government officials currently set standards, only 55% believed that they should. What groups are judged to be underrepresented in standard setting? The public is one; 7% of respondents believed that the public was involved, 46% believed that it should be. Scientists and the courts were judged to be underrepresented to lesser extents.
When asked about fair pay for the job, the median response was $100 for one day. When asked about the lowest pay that they personally would require to take the job, half of all respondents reported being unwilling to take the job at any price. About half of those who categorically refused the job at the described risk levels were willing to accept it at $50 per day if the risks were reduced by a factor of 25-100. Wage demands (but not judgments of risk) were reduced by a variant of the pamphlet that asked respondents to imagine that they had received it upon reporting to work in the morning and by a variant that did not tell them that they were being exposed to save permanent workers.

(c) Extensions
This particular pamphlet was but one attempt to present the facts fairly. Alternative versions could easily be prepared and it would be interesting to study their impact. For example, one might indicate who sets the standards and the likelihood that they will be exceeded by accident, what it is like to have cancer, or what the cure rates are. One could also detail the opinions of that minority of experts who believe that cancer risks from radiation exposures are higher than those indicated in the pamphlet or describe the views of those experts who believe that there is a threshold below which effects are absent. One could describe acceptance rates among other workers, the plant's financial situation (does it make a profit on the labor of these temporary workers?), other life events that cause cancer, or the risks from alternative jobs. Any of these variations could affect judgments of risk, equity, or bargaining power. Respondents' requests for additional information or alternative modes of presentation suggest that some such variations would be welcome. Nonetheless, the present pamphlet might not be too dissimilar in length and balance from what might eventually appear in real situations.
Although this particular study was concerned with informing just one particular category of worker, we believe that similar research should be done in conjunction with all programs to inform workers, patients or members of the general public.

HOW AND BY WHOM SHOULD INFORMATION BE PROVIDED?
Radiation information programs have enormous potential to influence the behavior of workers, patients, and citizens. The stakes are high: jobs, electricity costs, willingness of patients to submit to treatments, public safety and health, etc. Potential conflicts of interest abound. Responsibility for information programs should not be left solely to the natural triumvirate of science, industry and government, lest these programs run the risk of being viewed as propaganda campaigns. Since every decision about the design of an information statement can influence perception and behavior, extreme care must be taken to select knowledgeable and trustworthy designers and program coordinators. We cannot propose a general selection procedure here, as a competent and credible program staff would have to be put together in consultation with representatives of the people who are to be informed. If people do not trust their informants, there is little point in pursuing the program.