5.1 The application of information science, technology, and management to public health
Oxford Textbook of Public Health
Denis J. Protti
Health information science
The evolution of information technology
Technology and society
The evolution of the health-care industry
Computer-based group support systems
Data capturing technology
Data storage and retrieval technology
Fundamental changes due to changes in information technology
Integration of business functions
Shifts in the competitive climate
New strategic opportunities
Management of public health organizations: global competition
Case study: the United Kingdom’s NHS information management and technology strategy
Conclusion—questions to be answered
The field of public health has greatly benefited in the past and will benefit even more in the future from the effective application of the principles of information science and information management, and the effective implementation of information technology. Public health practitioners have at times had to make do with technology and systems designed to meet the requirements of the private sector or the acute care medical sector. As Friede et al. (1994) point out, public health information requirements are different and their needs unique. In the traditional clinical setting, the focus is on the single patient; in the public health setting, the focus is on the population.
Numerical information systems developed for patient care or the clinical laboratory are typically oriented towards facilitating the entry and review of a single record or of several hundred records of subjects in a study. In contrast, public health practitioners often need to examine thousands of records, although they may not need detailed information for each individual but only summary information about the population. In addition, holders of data are often eager to share selections of their data with others, and to engage in collaborative studies.
Textual information systems that describe the experimental medical literature are easily accessed through MEDLARS (Medical Literature Analysis and Retrieval System) and software packages such as Grateful Med. In contrast, searching the corresponding public health literature is difficult because government publications at all levels are not listed in Index Medicus, are not centrally stored, and have extremely variable formats and lengths. A further complicating factor is the paucity of public health-oriented keywords in the Medical Subject Headings (MeSH) system; hence making the public health literature available in full-text searchable form is an important way to provide access to it (Friede et al. 1994).
The data analysis needs of the clinically based epidemiologist often differ from those of public health professionals in health departments. The clinically based epidemiologist collects and analyses data from chart reviews and clinical trials, and needs software that supports non-parametric statistics and time-series analysis. However, the public health worker collects data from surveillance systems, population-based surveys, and outbreak reports and needs software that can be used to perform standardization, fit mathematical models to disease patterns, analyse data from complex surveys, and draw maps.
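The standardization task mentioned above can be made concrete with a short sketch of direct age-standardization, one of the routine calculations a health department performs when comparing disease rates across populations with different age structures. The age bands, rates, and standard population below are invented for illustration, not real surveillance data.

```python
# Direct age-standardization: weight each age-specific rate by the
# standard population's share of that age band. All figures are invented.

# Age-specific death rates per 100 000 in a hypothetical study population
study_rates = {"0-44": 50.0, "45-64": 400.0, "65+": 3000.0}

# A hypothetical standard population (person counts per age band)
standard_pop = {"0-44": 600_000, "45-64": 250_000, "65+": 150_000}

def direct_standardized_rate(rates, std_pop):
    """Sum of age-specific rates weighted by standard-population shares."""
    total = sum(std_pop.values())
    return sum(rates[band] * std_pop[band] / total for band in rates)

rate = direct_standardized_rate(study_rates, standard_pop)
print(f"Directly standardized rate: {rate:.1f} per 100 000")  # 580.0
```

The same weighted-average structure underlies comparisons between health districts: two populations standardized to the same reference population can be compared free of the confounding effect of age.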
The most significant difference is perhaps in the area of communications. The clinician needs to communicate with patients, the clinical laboratory, and colleagues who are typically close by. The amount of information to share is often small (a status, a recommendation) and urgency is often high; telephones and beepers fit these needs. Conversely, the public health practitioner needs to communicate with colleagues in the state or district laboratory, federal agencies, and research collaborators at many geographically separated sites; large groups may be called upon to make decisions. The amount of information to share is often large but urgency is rarely high. E-mail and video teleconferencing are often more appropriate technologies that fit these needs.
Before exploring the application of information management principles and the impact of information technology in the field of public health, it is important first to understand the foundations on which information science is built.
The most universal definition of information comes from philosophy: information is knowledge for the purposes of taking effective action (Meadow 1979). In his original treatise on cybernetics, Wiener (1948) compared the acquisition and processing of information in human beings and animals with the similar activity in the control of machines. Many have attempted to define information. A few examples follow:
an increment of knowledge
an interpretation of external stimuli
increasing the state of knowledge of a recipient
a reduction in uncertainty, following communication
any physical form of representation, or surrogate of knowledge, or of a particular thought, used for communication
a measure of one’s freedom of choice when one selects a message
recorded experience that is, or can be, used in decision-making.
The last definition is that of Churchman (1971), who postulated that recorded experience becomes information only when it is or can be applied to a decision process. Hence it is possible to have access to large amounts of descriptive raw data yet have little or no information. In an engineering or information theory sense, information is the capacity of a communications channel, a measurable quantity that is independent of the physical medium by which it is conveyed. Applying this theory to Churchman’s definition enables the measurement of the amount of information that can be obtained from a particular piece of raw data or descriptive material. In an ‘information system’ sense, information is generally considered to be data (raw material) that has been processed into a form that is meaningful to the recipient and is of real or perceived value in current or prospective decisions. Wiener regarded communication between the component parts of a community as vital to its activities. He saw an information system as the means by which the necessary communication can be established and maintained.
There appears to be no consensual definition of information. Information is a complex concept, and simplistic views of it lead to simplistic decisions. In a world of uncertainty, information reduces uncertainty. It changes the probabilities attached to expected outcomes in a decision situation and therefore has value in the decision process. It is so closely related to the concepts of thought, values, knowledge, and environment, that it is often difficult to isolate and adequately define ‘information’. Cybernetics, which is concerned with the use of information to effect certain control actions, is but one of many fields which claims to study information.
Wiener (1948) first suggested information theory. His contention that any organism is held together by the possession of means for acquisition, use, retention, and transmission of information denotes a biological sense of information. In their paper, ‘A mathematical theory of communication’, Shannon and Weaver (1960) provided the foundation for measuring information (in a non-semantic sense). Their concern was not with meaning, or the semantic aspects of information, but with the engineering problems of transmitting it. Information theory, as conceived by Shannon and Weaver, is concerned only with the factors that determine whether or not a message has been exactly or approximately transferred between a source and the destination. The principal elements of Shannon and Weaver’s communications system can be delineated as follows.
Information source: the originator of the messages, which are to be transferred to the destination. There is an almost unlimited variety of permissible message types. Typewriter-like systems use sequences of letters. Bank cheques are messages composed of letters and numbers.
Transmitter: operates on the message to transform it into a signal form, which can be transmitted over the communication channel (path).
Channel: the communication path over which the signal is transmitted to the receiver.
Receiver: usually performs the inverse function of the transmitter to yield a reconstruction of the message.
Destination: the intended termination of the message transfer.
A noise source is included in the model because unwanted signals in one form or another perturb all systems. There is a distinction between noise and distortion. Distortion is caused by a known (even intentional) operation and can be corrected by an inverse operation. Noise is random or unpredictable interference. Shannon and Weaver identified three levels of problems in a communication system:
Technical accuracy: just how accurately are the message symbols transferred from the message source to the destination?
Semantic accuracy: how accurately is the semantic meaning of the messages transferred from the message source to the destination? These semantic problems are concerned with how closely the destination interprets the knowledge conveyed by the message to the knowledge intended by the sender.
Effectiveness: how effectively does the received message control the system in the intended fashion?
The primary motivation for communication within a system is to instruct selected subsystems to take some course of action. Effectiveness is closely related to semantic accuracy, and the two problems cannot always be completely dissociated. In fact, it is not uncommon to discover situations in which it is either entirely impossible or meaningless to separate the three problem levels. No real communication can take place unless the transmitter and the receiver are making use of compatible codes or schemes for symbolic representation of information; consider, for example, a bridge player who fails to ‘catch’ and respond in an appropriate manner to his partner’s bidding signal.
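Shannon and Weaver’s non-semantic, measurable notion of information can be illustrated with a short sketch. The entropy of a message source, in bits per symbol, depends only on the probabilities of the symbols, not on what they mean; the probability values below are illustrative.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2 p), in bits per symbol.
    Zero-probability symbols contribute nothing and are skipped."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries exactly 1 bit per toss; a predictable, biased
# source carries less, because it resolves less uncertainty.
print(entropy_bits([0.5, 0.5]))   # 1.0
print(entropy_bits([0.9, 0.1]))   # about 0.469
```

This is the sense in which information ‘reduces uncertainty’: the more predictable the source, the fewer bits each message conveys, regardless of the physical medium carrying it.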
Although information science is not a direct descendant of information theory, many of its practitioners do attempt to retain the spirit of information theory by making information the central concept and by providing precise definitions of what it is. In information theory, the concept of information was never meant to express the meaning of a message; Shannon and Weaver, in fact, clearly stated that semantic concepts were quite irrelevant to the problem. Yet to many, information science is interested in the meaningfulness of information and in the usefulness of information to the user. It is a field of study which investigates how systems, humans, and/or machines retrieve information rather than just receive information. Humans are active rather than passive; they search for information for a specific purpose and do not just wait to process it, should it happen to pass by (Radford 1978).
Webster’s Dictionary defines ‘information science’ as the collection, classification, storage, retrieval, and dissemination of recorded knowledge, treated both as a pure and an applied science. Although Webster’s considers ‘informatics’ to be synonymous with information science, the literal translation of the French term informatique and the German term informatik is ‘the rational scientific treatment, notably by computer, needed to support knowledge and communications in technical, economic, and social domains’. The significant difference between the English and European definitions is the latter’s inclusion of the computer. It should also be noted that the European definitions of informatics make no explicit claim to its being a science.
The field of information science is perhaps best exemplified by Meadow (1979) who views it as a study concerned with:
the nature of information and information processes
the measurement of information (including its value) and information processes
the communication of information between humans and information machines
the organization of information and its effect on the design of machines, algorithms, and human perception of information
human behaviour in respect to the generation, communication, and use of information
the principles of design and measurement of the performance of algorithms for information processing
artificial intelligence applied to information processing.
Before discussing the broader concept of information science in public health, a historical review of the term medical informatics is in order.
Over the past 25 years, many have published their impressions and opinions as to what constitutes the field of medical informatics. One of the first to use the term was Reichertz (1973), a doctor, who defined medical informatics as the science of analysis, documentation, steering, control, and synthesis of information processes within the health-care delivery system, especially in the classical environment of hospitals and medical practice.
Over 25 years ago, Shires and Ball (1975) confidently wrote that ‘[t]he year 1975 will be noted as the year in which medical informatics became accepted as a legitimate term used to describe activities involved in assembling, correlating and making effective use of information and decision making in health care delivery’. Moehr et al. (1979) also observed that informatics as a science does not fit into the conventional classification of sciences: it neither belongs to the natural sciences—its objects are not phenomena of nature—nor is it a part of mathematics. It is not a human science nor is it one of the classical engineering sciences. In their opinion, informatics deals with investigating the fundamental procedures of information processing and the general methods of the application of such procedures in various application areas.
Another doctor, Levy (1977) defined medical informatics as the acquisition, analysis, and dissemination of information in health-care delivery processes. He concluded that on the grounds of relevance and direct appropriateness to modern medicine, informatics is a proper basic medical science. Van Bemmel (1984), who wrote that medical informatics comprises the theoretical and practical aspects of information processing and communication, based on knowledge and experience derived from processes in medicine and health care, expressed a similar view. This definition is tempered somewhat by Hannah (1985) who reported that nurses continue to consider the term ‘medical’ to be synonymous with the word doctor. A relatively new but highly related term is that of nursing informatics. In using the term, Hannah refers to the use of information technologies in relation to any of the functions which are within the purview of nursing and which are carried out by nurses. As evidenced by the above definitions, the term medical informatics on the one hand appears to be confined to the clinical practice of medicine while on the other it encompasses the broader notion of health and health-care delivery. The term public health informatics is now surfacing as evidenced by the National Forum ‘Accessing Useful Information: Challenges in Health Policy and Public Health’ held at the New York Academy of Medicine in March 1998.
Health information science
After an extensive review of the literature and a critical analysis of the aims and objectives of the University of Victoria’s new baccalaureate degree programme in health information science, Protti (1982) defined health information science as the study of the nature of information and its processing, application, and impact within a health-care system. This definition was not intended to be unique and mutually exclusive of the work of others. Rather it was an attempt to broaden the Reichertz domain of hospitals and medical practice to encompass all of health care. Health information science is to information science as health economics is to economics. An economist is one who specializes in the social science concerned chiefly with the description and analysis of the production, distribution, and consumption of goods and services. A health economist, upon familiarizing himself with the institutions, participants, and concepts of health, illness, and disease, analyses economic phenomena in health-care delivery and resource management settings. Similarly, a computer scientist is concerned with the science of properties, representation, construction, and realization of algorithms. A medical computer scientist is concerned with the application of these concepts to medical science and medical practice. To be effective, he or she must have more than a passing acquaintance with concepts of diagnosis and treatment of disease.
An engineer is concerned with the application of science and mathematics by which the properties of matter and the sources of energy in nature are made useful to many in structures, machines, products, systems, and processes. A biomedical engineer is concerned with the capacity of human beings to survive and function in abnormally stressing environments and with the protective modification of such environments.
In keeping with Meadow’s views, a health information scientist or health informatician should therefore be concerned with:
the nature of information and information processes in all aspects of health promotion, detection, and delivery of care
the measurement of information and information processes
the organization of information and its effect on the performance of health practitioners, researchers, planners, and managers
the communication of information between patients, health-care providers, administrators, evaluators, planners, and legislators
the behaviour of patients, health-care providers, administrators, planners, and legislators, in respect to the generation and use of information.
Many of health information science’s conceptual foundations are borrowed from other fields such as mathematics, economics, psychology, engineering, sociology, and biology. It is a discipline which differs from these not in subject content but in outlook. Information science in health is concerned with the individual and group behaviour of health-care personnel in their interaction with information and with the technology which processes information.
What are the major issues surrounding this complex and rapidly changing subject? To what extent will information technology affect the public health profession over the next 20 years? Rather than presume to have definite answers, this section will raise questions which readers will have to answer for themselves.
The section is structured so as to develop the following premises:
information technology is part of the larger domain of ‘technology’
the impact that informatics will have on public health can be seen by observing the impact technology is having on society.
Although technologies such as hydroponics, genetic engineering, and nuclear fission are important in the overall scheme of things, they will not be discussed in this section.
One must resist the temptation to predict unrealistically. A little more than a hundred years ago, the American Press Association organized a group of 74 leading authors, journalists, industrialists, business leaders, engineers, social critics, lawyers, politicians, religious leaders, and other luminaries of the day to give their forecasts of the world 100 years later. Among the most striking features of the 1893 forecasts is the remarkable paucity of predictions that actually came true. Some of them seem outlandish and completely disconnected from reality, yet they were fervently believed by their authors (Denning 1999). Predictions of what the world will be like 30 to 40 years from now are easy: the predictors need not worry about being around to defend their views. The nearer to the present, the more difficult the task, for the political, social, economic, and emotional issues which influence change are much more apparent. This section will attempt to identify the issues which will probably affect public health over the next 5 to 10 years. The extent to which one agrees with someone else’s views of the future is very much influenced by one’s own view of the past and present, and by how well one understands the issues and is challenged or threatened by their implications. One consolation is that all health professionals have to wrestle with the same questions.
The evolution of information technology
Information technology is not a new phenomenon. It has been around since the beginning of time. It entails people communicating with each other, and recording their thoughts, ideas, and actions for others to read or hear. The broad definition of information technology includes:
computers (mainframes to workstations, desktop personal computers, and multimedia)
telecommunications (switching systems to faxes)
networks (local area and wide area)
artificial intelligence, speech recognition, and expert systems.
In understanding information technology in a modern context, it is important to realize that the electronic computer is only one component in an elaborate and highly differentiated infrastructure. This infrastructure has grown through a succession of generations of computers, each of which represents a major change in technology. During the 8-year span of each computing generation (the first generation started in the late 1940s and the fifth in the early 1980s), revolutionary changes have taken place that correspond to those taking place over some 70 years or more in the aircraft industry. If other industries had matched these rapid and massive advances, aircraft would fly 100 times faster, a $200 000 home would cost $20 000, and a colour television would cost $20.
The definition of generations in terms of electronic device technology captures important aspects of computing technology such as cost decreases, size decreases, power increases, and so on. However, it fails to account for the qualitative changes that have given computing its distinct character in each generation. The change from mechanical to electronic devices made it possible to store programs as data, enabling the use of computers as a general-purpose tool and then the development of programming language compilers. The transistor made reliable operation possible and enabled routine electronic data processing and then interactive time-sharing. Integrated circuits reduced costs to the level where computers became commonplace and made possible the personal computer dedicated to the single user.
Each generation represents a revolution in technology with a qualitatively different impact. Each generation subsumes the capabilities of that preceding it, providing very much better facilities at very much lower cost, and adding new capabilities not possessed by the previous generations. One of the innovative new capabilities has been in the area of knowledge-based systems. The products stemming from breakthroughs in this area are expert systems, which simulate some of the processes of the human mind, knowledge representation, and inference, allowing expertise to be encoded for a computer and made widely available. This has generated a new industry based on creating expert systems to make the practical working knowledge of a human expert in a specific subject area such as medicine widely available to those without direct access to the original expert.
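The knowledge representation and inference at the heart of such expert systems can be sketched in a few lines as a forward-chaining rule interpreter: expertise is encoded as if-then rules, and the system repeatedly fires any rule whose conditions are satisfied. The toy rules and facts below are invented for illustration and are not drawn from any real clinical system.

```python
# A toy forward-chaining inference loop: the core mechanism of a
# rule-based expert system. Rules pair a set of required conditions
# with a conclusion to assert. All rules and facts here are invented.
rules = [
    ({"fever", "rash"}, "suspect_measles"),
    ({"suspect_measles", "unvaccinated"}, "notify_public_health"),
]

def infer(facts, rules):
    """Fire every rule whose conditions all hold, repeating until
    no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"fever", "rash", "unvaccinated"}, rules)
print(sorted(derived))
```

Note how the second rule fires only because the first has already added its conclusion to the fact base; chaining of this kind is what lets encoded expertise reach conclusions the original facts did not state directly.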
Technology and society
Society is experiencing its second major revolution in less than 200 years. The first was the Industrial Revolution of the nineteenth century, which saw the substitution of mechanical processes for human muscles. It changed the nature of work, though not the size of the workforce, and with it society’s view of human values. The spinning-jenny may have done the work of 1000 women, but hundreds of thousands were eventually needed in the mills. The automobile may have put the horse out of business, but Henry Ford saw to it that many more mechanics were needed than blacksmiths, many more oil industry personnel than haymakers. Although it had a significant impact on the nature of work, the Industrial Revolution did provide untold opportunities for the individual to hold a job at some level. Even if the job was classified as unskilled labour, the person still had an identity as a breadwinner, and could feel a sense of worth from that. If that industrial job was classified as skilled labour, the person had not only the benefits of the unskilled labourer, but in addition a higher job status. As a rule, every major technological advance destroys the civilization that existed at the time of its introduction into everyday life. The steam engine pushed us out of the fields; the cinema gathered us in huge crowds in darkened halls; television returned us to our own darkened living rooms. The compass and chronometer made intercontinental travel possible, the airplane makes it trivial, and advances in communications technology may make it unnecessary.
The second major revolution is the so-called electronic or information revolution in which electronic circuits are being substituted for human mental skills. The electronic revolution is not only replacing the mental processes of the unskilled labourer, but is creating a genuine human value dilemma for technologists, managers, and professionals. Technology is changing everyone’s job. What is both exciting and frightening is that the rate of change does not appear to be diminishing. As put by Kaiser:
We are at the cusp of a new century, but the alteration we are about to undergo is much more than a change in digits. It is far greater than the incremental steps—in science, art or engineering—that each century has so far brought us. It will be a quantum leap in consciousness, a dramatic step forward. The Internet, the electronic global brain, is behind this revolution. It will bring us to a new consciousness because it will allow us to share all the information we are able to gather from cultures past and present. And this sharing of information, this global conversation, will change the consciousness of the planet. (Kaiser 1999)
The fundamental economic activities of our society—agriculture and the multitude of extractive, manufacturing, and service industries—continue, but a new decision-making process increasingly influences them. Vastly more information (on markets, costs, techniques, other options) is being made available to decision-makers because of the information technology now available. This information is being eagerly sought because more informed decisions, be they in politics, operating factories, hospitals, public health agencies, or any organization, are likely to produce better results. The electronic revolution has made robotics a reality. The development and use of intelligent robots, which perform delicate tasks that once could have been done by thinking human beings, is increasingly commonplace in manufacturing sectors of society. Robotics is beginning to be introduced into health care. How long before robots become commonplace there?
While the industrial age found its symbol in the factory, the symbol of the information age is the computer, which can hold all the information in the Library of Congress in a machine the size of a small refrigerator. Alternatively, its proper symbol may be a robot, a machine capable of supplanting age-old manual labour and liberating human beings from the most arduous and repetitive tasks. Perhaps its symbol is the direct broadcast satellite, which can send television programmes directly into homes around the globe. Telephone companies the world over are joining forces under the banner of the Integrated Services Digital Network (ISDN), which is described as the key to linking all the elements of the information age. ISDN is several things at the same time, but it will allow every home and organization to receive voice, computing, and video signals simultaneously on a single telephone line.
A popular way of looking at information technology is in terms of its utility. The most frequently used reasoning to justify purchases of information technology goes as follows: labour expenses are high and getting higher, computer expenses are low and getting lower; it then logically follows that one should always trade an expensive commodity, such as labour, for an inexpensive commodity, such as computers. One of the resulting dilemmas is that value has become less personal and more social or group-oriented. In a technological society, the individual has the potential of becoming insulated against ethical and moral decisions as these responsibilities are projected onto society itself. For people whose identities have been embedded in their jobs, traditional culture provides no guidelines to help them value themselves after they have been more or less excluded from the productive parts of society.
The evolution of the health-care industry
The future of the health-care industry is not the same in all parts of the world. In many parts of the world, the health-care industry is struggling to satisfy the most basic and fundamental of needs. In other parts of the world, the rapid advances in medical science are putting strains on governments to provide the best possible care, given the limited resources available. In the United States and the United Kingdom, the future of the health-care industry is quite clear—the future is competition. Competition is not new, nor is it unique to the Americans and the British; competition has always existed in terms of institutional pride, quality of care, staff prestige, and reputation. In the United States and the United Kingdom, competition is being redefined to include price and marketing as important factors and the key to being competitive is how well information is provided and used.
The increasing emphasis on competition has spurred the movement towards ‘alternate-site’ medicine, that is the delivery of health care outside the traditional, and costly, hospital setting. Of particular interest is the role technology plays in this new movement. Diagnostic tests that were once run in the hospital or in large clinical laboratories are now being performed in doctors’ offices in minutes and at a fraction of the cost charged by the large automated laboratories. Increasing numbers of surgical procedures are now performed routinely in outpatient day surgery units and in private surgical centres. Technological advances such as the lithotriptor replace complex and costly major surgery, along with its 10- to 12-day hospital stay, with a 1-day procedure which ‘shatters’ rather than removes kidney stones. Medical costs are also being reduced by treatments that can be performed by many patients at home.
The market—which includes not only the drugs used in the treatment but also auxiliary equipment, such as small programmable pumps—now consists primarily of special nutritional products and services (aimed at patients with abnormal digestive systems), kidney dialysis, and continuous intravenous drug administration. Home therapies will almost certainly embrace such intractable disorders as Alzheimer’s disease and many forms of physical rehabilitation. One company markets a home chemotherapy system for cancer patients, many of whom would normally have to receive anticancer drugs in a hospital or doctor’s office. Patients who are healthy enough to live at home can often use a continuously administered prepackaged drug or combination of drugs. The drugs are contained in a small plastic pouch that is attached to a catheter. A portable programmable pump delivers the drugs at a slow constant rate. One of the advantages to this approach is that the steady infusion of such drugs often eliminates the side-effects, such as nausea, that usually accompany large doses.
The benefits of such procedures, moreover, extend well beyond lower costs. Recovery times for many patients are dramatically reduced in the familiar and comfortable home environment. A lens implant in the eyes of a 75-year-old woman allows her to continue to live independently in the home and community which are meaningful to her. The quality of life is infinitely ‘better’ than moving to a home for the blind in a nearby town or city. Neonatal intensive care units are allowing life to be continued in hundreds of cases in which death would have been a certainty 25 years ago. Microcomputing technology is providing artificial voices for those who cannot speak, workstations for the sightless, and communication for those paralysed by stroke. The elderly, handicapped people, and others with high-risk medical conditions can find a new level of security when their hospitals use a computer and the telephone system to guarantee them almost instant response in an emergency. The list grows with each passing year. What are the implications of these trends for the public health practitioner?
Perhaps the form of modern information technology that will most affect the public health practitioner in the future is communications technology. Advances in telecommunications are matching the speed of those in computing technology. More importantly, the cost of this form of technology is now dropping, after years of overpricing. Nowhere is this drop in cost more evident than in the explosion of the Internet.
What is it?
What railroads were to America in the 19th century and superhighway systems were in the 20th, high bandwidth networks are in the 21st century. (Mitchell Kertzman, CEO, Powersoft Corporation)
Ask for a definition of the Internet and, depending on whom you ask, you will get either a simplistic answer or one that is long, detailed, and mainly incomprehensible. The simplest way to describe the Internet is with one word: communication. The Internet is often called a network of networks (Plucauskas 1994). It provides a vehicle for networks of all kinds and individual stand-alone computers to intertwine to form a global network, which connects people the world over. Exactly how many people is not easy to determine. In 1994, it was reported that the number of users with access to the Internet was growing at 10 per cent per month; forecasts were that by the turn of the century there might be 1 million networks, 100 million computers, and 1 billion users on the Internet (Smith and Gibbs 1994). In 1998, it was reported that the number of Internet users outside the United States was growing at an average annual rate of 70 per cent, and would surpass users in the United States by 2002. At that time there were some 60 million Internet users worldwide, of whom 68 per cent were in the United States and Canada; the worldwide total was forecast to reach 228 million by 2002 (Ohlson 1998).
As of July 1999, 205 countries or territories had at least one connection to the Internet; only four new countries joined in the first 6 months of 1999. This slowing of the Internet’s spread is simply because few unconnected countries remain. Estimates of the number of people on the Internet range between 50 and 80 million worldwide, with 3 to 5 million users in Europe. But estimates are all that is available (just as there are only estimates of how many people watched a particular television show); there is no precise way to count the number of people on the Internet.
The users are connected to small local area networks in their offices where they share files and e-mail. Increasingly, these local area networks are being connected to form groups of thousands of computers that are linked across large areas, sometimes referred to as wide area networks. The speed at which one can do things on the Internet is remarkable, not because it is particularly speedy, but because it enables one to travel around the world in seconds. The lure of the Internet is communication and access. People who want to exchange ideas and develop knowledge are increasingly doing it on the Internet (for example, librarians whose job it is to find documents, books, and other materials now share their catalogues through the Internet).
The system that has grown into the Internet was originally designed for the United States military in 1969 by the Advanced Research Projects Agency (ARPA). The first configuration of its network, the ARPANET, involved four computers and was designed to demonstrate the feasibility of building networks using computers dispersed over a wide area. Each computer communication point or node on the Internet is able to pass information on to the next node. Information on the Internet is controlled using a set of data communications standards known as the transmission control protocol/Internet protocol. Protocols are agreed methods of communication used by computers, similar to the protocols people use when communicating with one another. The specifics of the transmission control protocol are complex and highly technical, and beyond the scope of this chapter. Suffice it to say that the transmission control protocol/Internet protocol is designed to ensure that every piece of information finds the most direct route to its destination. The hundreds of thousands of nodes around the Internet form a web in which information can travel; this eliminates the need for central communication switches and means that, as long as at least two nodes are in contact, the network remains operational.
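As an illustration of the reliable, ordered delivery that the transmission control protocol provides, the following Python sketch passes a message between two locally connected sockets. This is a sketch only: the socket pair stands in for two Internet nodes, and the message text is invented.

```python
import socket
import threading

# A pair of connected sockets stands in for two nodes on the network;
# TCP guarantees that the bytes arrive intact and in order.
sender_end, receiver_end = socket.socketpair()

def send_report():
    sender_end.sendall(b"outbreak report: confirmed")
    sender_end.close()  # closing signals end-of-stream to the receiver

worker = threading.Thread(target=send_report)
worker.start()

received = b""
while True:
    chunk = receiver_end.recv(1024)
    if not chunk:          # an empty read means the sender has closed
        break
    received += chunk
worker.join()
receiver_end.close()
print(received.decode())   # outbreak report: confirmed
```

In reality the two endpoints would sit on different machines, with intermediate nodes choosing the path between them; the protocol’s job is to make that path invisible to the application.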
There is no single owner, or even a formal coalition, that actually ‘owns’ the Internet. The various subnetworks have owners who recognize that having connections to other networks either enhances their mission or makes their services more desirable. The only group that ‘runs’ the Internet is the Internet Society. The Internet Society is a professional membership society with more than 150 organizational and 6000 individual members in over 100 countries. It provides leadership in addressing issues that confront the future of the Internet, and is the organizational home for the groups responsible for Internet infrastructure standards, including the Internet Engineering Task Force and the Internet Architecture Board. A common stake in maintaining the viability and global scaling of the Internet binds the Society’s individual and organizational members. They comprise the companies, government agencies, and foundations that have created the Internet and its technologies, as well as innovative new entrepreneurial organizations that help to maintain that dynamism. The Society is governed by a Board of Trustees elected by its membership around the world.
There are many different ways to send and receive information across the Internet. There are also many recent publications that provide in-depth descriptions. Many are available for free around the Internet or can be purchased in the ever-growing number of computer sections in bookstores. The ability to access different tools depends on the type of Internet account and the sophistication of the interfaces users employ to log on (connect) to the Internet.
E-mail is and probably always will be the most common use of the Internet. It allows Internet users to send and receive messages from around the world. Database searches can also be requested by e-mail, with the results posted back to the user’s account. E-mail is also used to join electronic mailing lists (called listservers) on specific topics of interest, and to transfer text, program files, spreadsheets, and even photographic images. Messages can be sent and received within hours at most, and often within minutes; it is no wonder that most e-mail users refer to the regular postal service as ‘snail mail’.
E-mail is based on the fundamental concept of store-and-forward technology. The store part refers to a message being added to a storage system by the message’s originator. When the recipient is ready the message is forwarded for retrieval. The beauty of this technique is that the recipient does not have to be available when the originator sends the message. This enables the e-mail system to select how the message will move from the place where it is first stored to the place where it is retrieved (forwarded to the user).
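The store-and-forward idea can be sketched in a few lines of Python. The class and message texts below are invented for illustration and stand in for a real mail system’s storage:

```python
from collections import defaultdict, deque

class MailStore:
    """Minimal store-and-forward sketch: a message is stored on arrival
    and forwarded only when its recipient asks for it."""
    def __init__(self):
        self._boxes = defaultdict(deque)

    def store(self, recipient, message):
        # The originator need not wait for the recipient to be on-line.
        self._boxes[recipient].append(message)

    def forward(self, recipient):
        # Retrieval can happen at any later time, in arrival order.
        box = self._boxes[recipient]
        return [box.popleft() for _ in range(len(box))]

mail = MailStore()
mail.store("jsmith", "Minutes of Tuesday's meeting")
mail.store("jsmith", "Revised vaccination schedule")
print(mail.forward("jsmith"))  # both messages, in the order sent
print(mail.forward("jsmith"))  # [] - nothing new since last retrieval
```

The separation of `store` from `forward` is precisely what frees sender and recipient from having to be on-line at the same time.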
It is becoming increasingly easy to find anyone on the Internet—even if one does not know his or her e-mail address. Internet addresses are in two parts, a ‘domain’ name and a user name separated by an ‘@’ sign. The domain name (more correctly called a hierarchical name) consists of the name of the machine on which the user has an account, along with the network groups and subgroups leading to that computer, thereby giving that machine a unique identification which enables the Internet software to determine where to deliver the message. Delivering the message to the addressee is then up to the named computer. The computer’s name is chosen locally and is often colourful or thematic. User names can be cryptic. They are often composed of first initial and last name but can be shortened to a nickname or identifying numbers. All Internet alphanumeric addresses are actually aliases for numeric addresses, such as 126.96.36.199. The alphanumeric addresses are used because, even though they can be hard to interpret, they are easier than the numeric names. Machines on the Internet called name servers handle the translation of alphanumeric names into numeric addresses. To bypass the cumbersome Internet address of a person or persons, many mail programs enable users to create aliases. Aliases are particularly helpful when e-mail is sent to a group of people.
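The translation that name servers perform can be seen using Python’s standard library; ‘localhost’ resolves without any network access, and the alias and addresses shown are invented:

```python
import socket

# A name server translates an alphanumeric name into a numeric address;
# 'localhost' is resolved locally, so no network connection is needed.
numeric = socket.gethostbyname("localhost")
print(numeric)  # a dotted numeric address, typically 127.0.0.1

# A mail program's alias works one level up: a short name standing in
# for one or more full addresses (these addresses are invented).
aliases = {"epi-team": ["jsmith@uni.example", "mlee@uni.example"]}
print(aliases["epi-team"])
```

A message addressed to the alias would simply be expanded into one message per underlying address before being handed to the mail system.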
E-mail is such an inexpensive form of communication, and it is so easy to send copies of messages to long distribution lists, that recipients may receive much mail of little or no value to them. As a result, new filtering software is being developed to help sort the wanted from the unwanted mail. Users can develop their own filtering rules (such as: if from ‘boss’, display immediately) and can modify them at any time. Assigning points to messages to indicate their importance is another variation on the same theme, intended to make e-mail communications among groups more effective: relevant information reaches the recipients with less waste of time on the part of both senders and recipients.
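A filtering rule of the kind described (if from ‘boss’, display immediately) amounts to a test paired with an action. A minimal sketch in Python, with the rules and messages invented for illustration:

```python
# Each rule pairs a test on the message with an action label; rules are
# checked in order and the first match wins.
rules = [
    (lambda m: m["from"] == "boss", "display immediately"),
    (lambda m: "unsubscribe" in m["body"].lower(), "junk folder"),
]

def classify(message, default="inbox"):
    for test, action in rules:
        if test(message):
            return action
    return default

print(classify({"from": "boss", "body": "Budget due Friday"}))
# display immediately
```

Because the rules are plain data, a user could add, remove, or reorder them at any time, as the text describes.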
Usenet newsgroups are Internet bulletin boards that are similar to listservers but require the use of software known as a newsreader. There are thousands of Usenet groups and listservers for discussion of medical and health-related topics. A number of groups have already put up information relating to community and public health issues. The Institute of Maternal and Child Health Policy at the University of Florida has provided information through the Maternal and Child Health Network. The topics include items such as vaccination requirements, injury prevention summaries, school health information, and child health policy documents. Internet resources for the hearing impaired, including newsletters, software, and demographic data, are available through a number of sites. As well as providing services for the hearing impaired, the Internet can be used as an enabling technology, especially for individuals who are homebound or live in an institution. Other types of community health information available include breast cancer support and poison control information.
The potential of the Internet for public health cannot be overstated. The World Health Organization (WHO) says it receives as many as five unconfirmed rumours a week of new infectious disease outbreaks, by telephone, newspaper, or e-mail. Each rumour is then investigated and a rumour/outbreak list is sent out electronically on a need-to-know basis to relevant personnel at the WHO, its collaborating centres, and other public health authorities. These reports, however, are not intended for public consumption. The WHO will only post news of an outbreak on the public web page of its Division of Emerging and other Communicable Diseases (EMC) after confirmation. Because confirmation often requires sending specimens to a laboratory that may be outside the country of origin, the WHO system is notoriously slow at alerting the world at large to outbreaks.
Public health experts who want their news immediately have learned to rely on a web site known as ProMED-Mail (the name represents Program for Monitoring Emerging Diseases). Founded by New York State Health Department epidemiologist Jack Woodall, ProMED-Mail is an Internet e-mail based system connected by satellite to ground stations and Internet nodes throughout the world. Anyone can subscribe. In the 4 years since it went online, ProMED-Mail has grown from 40 subscribers in seven countries to 15 000 in 150 countries and is now considered by experts to be an indispensable, although not wholly reliable, medium for transmitting news of outbreaks and connecting health experts to the far corners of the globe (ProMED-Mail http://www.healthnet.org/) (Taubes 1998).
However, it is not devoid of controversy. The fact that ProMED provides such rapid access is a great strength, yet that same rapidity of access can breed problems with quality control—these are the two sides of a coin in the electronic age. ProMED staff are overworked and underfunded, and they cannot fact-check every entry that is made. This unavoidable situation led to criticism that ProMED-Mail disseminates rumours. David Heymann, who directs the WHO’s Division of EMC, calls ProMED a ‘very valuable service’, but adds that the WHO does not participate in their discussions, because the organization is ‘not in a position to discuss rumors with the general public’. The WHO goal, he says, is to get the rumours and check them out (Anonymous 1998).
Among the increasingly common applications of the Internet for public health is the Public Health Focus Team (http://www.pelican.gmpo.gov/pubhealt.html), whose purpose is to facilitate actions to minimize adverse health effects resulting from consumption of seafood harvested from the Gulf of Mexico or from contact with its waters. The Health Canada web site (http://www.hc-sc.gc.ca/english/promo.htm) is but one of many similar sites around the world designed to help the public at large readily access information on healthy living and health promotion.
Another Internet tool is Telnet with which users can log in to other computers around the Internet. Through Telnet a person can access other computer sites using his or her own computer as a terminal. This is particularly useful for accessing medical libraries and other health-care database systems that are linked to the Internet.
File transfer protocol is the method by which computers transfer data or files around the Internet. Files can be simple text, usually known as ASCII files, or more complex data such as graphics or computer programs, known as binary files. The ability to pull down a file, to get data, or to run a program (if the file is executable) is vital for people doing research and development work. The Internet transfers files at a rate of millions of bytes per second, and with the coming of the National Research and Education Network, that will soon be upgraded to gigabytes (thousands of millions of bytes) per second. File transfer protocol can do more than just retrieve files. It can be used to transfer files to remote machines from a given computer. To make it a practical tool, file transfer protocol includes commands for listing directories, listing the files in directories, changing directories, obtaining information about what is being done, and setting parameters for how operations will be performed. Many pieces of free software can be obtained around the Internet via anonymous file transfer protocol, which allows users to log in to file transfer protocol sites where they do not have accounts. These anonymous file transfer protocol sites together contain millions of files that add up to terabytes of information.
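The distinction between ASCII and binary files matters because file transfer protocol handles the two in different transfer modes, and a client must choose the right one. The heuristic below, which guesses a file’s type from its bytes, is an invented illustration rather than part of the protocol itself:

```python
def guess_transfer_mode(data: bytes) -> str:
    """Heuristic sketch: NUL bytes or a high share of non-text bytes
    suggest a binary file; otherwise assume plain ASCII text."""
    if b"\x00" in data:
        return "binary"
    text_bytes = bytes(range(32, 127)) + b"\r\n\t"
    nontext = sum(byte not in text_bytes for byte in data)
    return "binary" if data and nontext / len(data) > 0.30 else "ascii"

print(guess_transfer_mode(b"Annual report, plain text.\r\n"))  # ascii
print(guess_transfer_mode(bytes([0, 1, 2, 255, 254])))         # binary
```

Transferring a binary file in ASCII mode corrupts it, because ASCII mode rewrites line endings; this is why the distinction is worth automating.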
The World Wide Web is the newest and perhaps the most powerful Internet service. It provides links to information via hypertext and, for those who have the proper type of Internet access, it can bring multimedia Internet to the desktop. Hypertext provides links to other information sources through selected, or highlighted, words within a text. A person simply chooses the highlighted word to receive further facts on the topic of interest. The links may lead to data located on the same machine or anywhere else on the Internet. According to one study, the World Wide Web is estimated to contain approximately 800 million pages of publicly accessible information. As if the Web’s immense size were not enough, it continues to grow at an exponential rate, tripling in size every 2 years.
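Hypertext links are ordinary markup that software can follow mechanically. The short Python sketch below pulls the link targets out of a fragment of a web page; the page content is invented, though the WHO address is real:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the targets of hypertext links (<a href=...>) in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<p>See the <a href="http://www.who.int/">WHO</a> web site.</p>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['http://www.who.int/']
```

A browser does essentially this on every page it displays; the ‘spiders’ discussed below do the same thing to discover new pages to index.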
The two basic approaches to searching the Web are search engines and subject directories. Search engines allow the user to enter key words that are run against a database (most often created automatically by ‘spiders’ or ‘robots’). Based on a combination of criteria (established by the user and/or the search engine), the search engine retrieves from its database the World Wide Web documents that match the key words entered by the searcher. It is important to note that the search engine is not searching the Internet ‘live’, as it exists at this very moment; rather, it is searching a fixed database compiled some time before the search. While all search engines are intended to perform the same task, each goes about it in a different way, which sometimes leads to strikingly different results. Factors that influence results include the size of the database, the frequency of updating, and the search capabilities. Search engines also differ in their search speed, the design of the search interface, the way in which they display results, and the amount of help they offer.
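The fixed database behind a search engine is essentially an inverted index: a map from each word to the documents containing it, built at indexing time and merely consulted at query time. A toy sketch, with invented documents:

```python
# Documents a 'spider' might have gathered (contents invented).
documents = {
    "doc1": "measles outbreak surveillance report",
    "doc2": "injury prevention summaries for schools",
    "doc3": "measles vaccination requirements",
}

# Build the inverted index once, ahead of any search.
index = {}
for doc_id, text in documents.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def search(*keywords):
    # Return the documents matching every key word entered.
    hits = [index.get(word, set()) for word in keywords]
    return sorted(set.intersection(*hits)) if hits else []

print(search("measles"))                 # ['doc1', 'doc3']
print(search("measles", "vaccination"))  # ['doc3']
```

The sketch also makes plain why engines disagree: each builds its own database, at its own times, with its own rules for splitting and matching words.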
In most cases, search engines are best used to locate a specific piece of information, such as a known document, an image, or a computer program, rather than a general subject. Examples of search engines include:
Northern Light (http://www.northernlight.com/).
The growth in the number of search engines has led to the creation of ‘meta’ search tools, often referred to as multithreaded search engines. These search engines allow the user to search multiple databases simultaneously, via a single interface. While they do not offer the same level of control over the search interface and search logic as do individual search engines, most of the multithreaded engines are very fast. Recently, the capabilities of meta tools have been improved to include such useful features as the ability to sort results by site, type of resource, or domain, the ability to select which search engines to include, and the ability to modify results. These modifications have greatly increased the effectiveness and utility of the meta tools. Popular multithreaded search engines include:
Subject-specific search engines do not attempt to index the entire Web. Instead, they focus on searching for websites or pages within a defined subject area, geographical area, or type of resource. Because these specialized search engines aim for depth of coverage within a single area, rather than breadth of coverage across subjects, they are often able to index documents that are not included even in the largest search engine databases. For this reason, they offer a useful starting point for certain searches.
Subject directories are hierarchically organized indexes of subject categories that allow the web searcher to browse through lists of websites by subject in search of relevant information. They are compiled and maintained by humans and many include a search engine for searching their own database. Subject directory databases tend to be smaller than those of the search engines, which means that result lists tend to be smaller as well. However, there are other differences between search engines and subject directories that can lead to the latter producing more relevant results. For example, while a search engine typically indexes every page of a given website, a subject directory is more likely to provide a link only to the site’s home page. Furthermore, because their maintenance includes human intervention, subject directories greatly reduce the probability of retrieving results out of context.
Because subject directories are arranged by category and because they usually return links to the top level of a website rather than to individual pages, they lend themselves best to searching for information about a general subject, rather than for a specific piece of information. Examples of subject directories include:
Open Directory (http://dmoz.org/)
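A subject directory can be pictured as a small hand-maintained hierarchy that a searcher descends category by category rather than querying by key word. The categories below are invented for illustration, though the Health Canada address appears earlier in this chapter:

```python
# A hand-compiled hierarchy: categories lead to lists of site links.
directory = {
    "Health": {
        "Public Health": {
            "Epidemiology": ["http://www.healthnet.org/"],
            "Health Promotion": ["http://www.hc-sc.gc.ca/english/promo.htm"],
        },
    },
}

def browse(path):
    """Descend through the named categories and return what lies there."""
    node = directory
    for category in path:
        node = node[category]
    return node

print(browse(["Health", "Public Health", "Epidemiology"]))
# ['http://www.healthnet.org/']
```

Because a human editor placed each link in its category, everything found by browsing arrives with its context attached, which is the directory’s advantage over keyword matching.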
Owing to the Web’s immense size and constant transformation, keeping up with important sites in all subject areas is humanly impossible. Therefore a guide compiled by a subject specialist to important resources in his or her area of expertise is more likely than a general subject directory to produce relevant information and is usually more comprehensive than a general guide. Such guides exist for virtually every topic. For example, Voice of the Shuttle (http://vos.ucsb.edu/) provides an excellent starting point for humanities research. Just as multithreaded search engines attempt to provide simultaneous access to a number of different search engines, some websites act as collections or clearing houses of specialized subject directories. Many of these sites offer reviews and annotations of the subject directories included and most work on the principle of allowing subject experts to maintain the individual subject directories. Some clearing houses maintain the specialized guides on their own website while others link to guides located at various remote sites. Examples of clearing houses include:
Argus Clearinghouse (http://www.clearinghouse.net/)
Virtual Library (http://www.vlib.org/).
Hooking up to the Internet
Not all Internet connections are the same; some allow users to access only certain types of Internet tools, through host-based dial-up interfaces. Other, usually commercial, Internet connections enable users to employ Windows- or Macintosh-based software to access the Internet. Although these commercial connections can be more expensive than other options, the ease of use that their interfaces offer makes them an attractive option, especially for first-time users. The three most common methods of accessing the Internet are through a university affiliation, via a community access bulletin board known as a Freenet, or through a commercial service provider.
Universities were the first large-scale users of the Internet; thus virtually all schools within a typical university have some form of Internet connectivity. Students, faculty, and groups with university affiliations may be eligible for some level of Internet connectivity through their institution. Most universities will give those who are eligible an account on a computer that has a connection to the Internet. University Internet accounts are generally accessed by dialling in to a university machine over a modem, from a personal computer either at home or on campus. University accounts can be a good place to start navigating the Internet, especially as they are usually available free as part of a university affiliation; most universities do not expect their users to pay for Internet access. However, unfriendly interfaces and a lack of user support have been known to cause problems for university-based users.
Freenets are community-based electronic bulletin boards that allow users Internet access. Freenets are relatively new tools, but more are coming on-line every month. The Freenet system is menu driven and is set up with the novice user in mind. They are, as the name implies, free to use; however, donations are strongly encouraged to help offset operating costs. As well as being able to obtain local information, including health-care resources, registered users can access e-mail and other Internet tools. Freenets are not intended for business or commercial ventures and users are limited in the scope of tools they can use. For example, the use of file transfer protocol and Telnet is limited on most Freenets.
Commercial Internet accounts
Commercial service prices can vary greatly depending on the provider and the type of services used. The four most common types of Internet access that commercial providers supply are as follows:
Dial-up host access—similar to a university account with access through a text-based computing environment. However, interfaces and user support may be better.
Dial-up serial line Internet protocol and point-to-point protocol access—these protocols allow full Internet access over a modem and telephone line, so users can employ interfaces that reside directly on their own computers. This is especially useful for neophytes, who can make use of Windows and Macintosh graphical interfaces, many of which are free, to access Internet tools. Serial line Internet protocol and point-to-point protocol can also be used with multimedia Internet through the World Wide Web and browsers such as Mosaic.
Dedicated serial line Internet protocol and point-to-point protocol access—in this case a dedicated serial line Internet protocol and point-to-point protocol account is open for use 24 hours a day.
Dedicated link—used to connect an entire local area network to the Internet and/or be connected to computer(s), which will act as Internet information servers. These links can be quite expensive and are usually only feasible at an organizational level.
The Internet was originally developed so that science and research could share resources. To a great extent, communications in the form of e-mail and discussion groups have overshadowed the Internet use for resource sharing. Although the traditional methods of scholarly communication—presentations at conferences, publishing of papers in journals, and so on—have not been eliminated, they are being recognized as inadequate for current research needs. The Internet distributes information in a way that is infinitely more flexible and more timely. Findings, papers, and information can be instantly shared and discussed.
The Internet can provide an innovative solution for meeting a variety of communication needs within the public health community. As services and tools expand and improve, more ways of applying the technology will continue to be found; the Internet will probably soon become a service from the telephone company. It is, however, important to remember that the Internet is not about technology, it is about people. The tools and applications are only as valuable as the people they enable and empower to communicate.
Computer-based group support systems
There are a varied and growing number of computer-based group support systems including computer conferencing, video and audio teleconferencing, document interchange services, meeting support tools, and group decision support. Perhaps the most common group support system aimed at increasing the effectiveness of communication amongst individuals is computer conferencing.
Computer conferencing is a teleconference that uses computers, software, and communications networks to allow groups of people to exchange ideas, opinions, and information (McNurlin and Sprague 1989). The people in a teleconference may all be located in the same building, or they can be scattered worldwide. Each user signs on to the teleconference (via a terminal or personal computer) at his or her own convenience; there is no need for members of the group to be using it simultaneously, although they may do so if they choose.
Although similar in some ways, computer conferencing systems are different from other types of computerized communication, such as e-mail, bulletin boards, and information retrieval services. E-mail is essentially a one-to-one (or one-to-many) form of communication. Moreover, after a message has been read, it is generally deleted; there tends to be little or no storage of messages. Bulletin boards provide storage but are designed mainly for posting notices for other people to read, a one-to-many type of communication. Information retrieval services provide a stored database that users can retrieve from but cannot change.
Computer conferencing systems typically include not only e-mail and bulletin boards but also many-to-many communications, by allowing all participants to join topics and enter comments on the subject being discussed. They provide storage; comments are not deleted after being read. Once a person has joined one or more conferences, each time he or she logs on the system reports how many messages have been entered in those conferences since the last session, and delivers those messages one at a time. Someone who tires of a conference can leave it and receive no more messages from it. Computer conferencing allows the setting up of subconferences for discussing some aspect of the general subject in more detail. Some systems also allow voting, to indicate consensus of opinion.
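The bookkeeping such a system performs, storing every comment permanently while delivering each member only what is new since the last log-on, can be sketched briefly; the class, names, and comments below are invented for illustration:

```python
class Conference:
    """Sketch of a computer conference: comments are stored permanently,
    and each member receives only what arrived since his or her last
    log-on."""
    def __init__(self, topic):
        self.topic = topic
        self.comments = []       # permanent record of the discussion
        self._last_seen = {}     # per-member read position

    def post(self, author, text):
        self.comments.append((author, text))

    def log_on(self, member):
        start = self._last_seen.get(member, 0)
        new_messages = self.comments[start:]
        self._last_seen[member] = len(self.comments)
        return new_messages

conf = Conference("immunization policy")
conf.post("anne", "Draft schedule attached.")
conf.post("raj", "Suggest adding a catch-up dose.")
print(conf.log_on("maria"))  # both comments on maria's first log-on
conf.post("anne", "Agreed.")
print(conf.log_on("maria"))  # only the one new comment
```

The permanent `comments` list is what distinguishes conferencing from e-mail, where messages tend to be deleted once read.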
The benefits of computer conferencing include:
rapid exchange of information
stimulation of new ideas
a written record of discussions
convenience: it can be used at any time
avoidance of telephone tag and the slowness of mail
handling of a dispersed group as easily as a local one
branching, which allows special-interest discussions and limits junk mail
easy catching up for late joiners
settlement of matters without face-to-face meetings
a collaborative nature that users like
support for group interaction, valuable for project management
chance meetings of people with shared interests
cross-fertilization of ideas
more proactive participation by managers
interaction of large numbers of people as equals.
Video conferencing technology is becoming an affordable reality that can substantially increase communications productivity. The products being developed to support ‘personal conferencing’ exploit the power and availability of the workstation and the capabilities of interfaces such as Microsoft Windows, Presentation Manager, X Windows, and Motif. They allow users on either end to share moving video images, voice communications, documents, and even applications across the ‘new’ digital ISDN phone service or over common high-speed local area networks and wide area networks such as Ethernet or Token Ring. They now even work over public data networks such as the Internet (Linthicum 1994). In New Zealand 5 years ago it was estimated that there were around 30 to 40 users of video conferencing. Now it is estimated that around 300 New Zealand organizations dial into video conferencing with systems that range from around US$17 500 to US$70 000. Payback time has become more attractive as managers identify savings in travelling time, which over time can be put into other business areas (Tapsell 1998).
The multimedia revolution is making the presence of speakers and microphones on workstations commonplace. The new mini video cameras are less intimidating than camcorders. The picture that these cameras produce and the sound quality from the audio equipment are surprisingly good. The price of these systems ranges from US$500 to US$20 000, depending on the features and the platform supported.
Video conferencing still has network problems and, until 1993, when desktop units came along, conference room systems cost upwards of US$60 000; portable units started at US$25 000 (Strauss 1994). A research report estimated that there were just 14 000 desktop units installed worldwide at the end of 1993, but it expected that the number would soar to 1.96 million by the end of 1998. Even in view of AT&T’s arrangement with Intel to create a personal conferencing gateway, universal video calling does not seem likely for a few years. However, innovative use of the technology is giving early users a competitive edge. As an example, most banks have television cameras at their automated teller machines for security purposes. Some are turning that camera into a two-way videoconference application and staffing the bank remotely. They let customers interact with bank personnel and do all of their banking around the clock at considerably less expense than keeping branch offices open. MEDITrust pharmacy (http://www.meditrust.com/), a Canadian mail-order pharmaceutical firm, has created a virtual pharmacy. It puts a video/phone/data kiosk in a convenience store, connects it to a pharmacist at a remote site over ISDN lines, and sends medicines by mail. The kiosk scans the prescription so that the pharmacist can give advice, stamps the prescription filled, and processes the credit card order. Medicine is sent to a customer’s home by 2-day special mail service. The convenience store may be in a rural village too small to support a full pharmacy. Employers are considering offering their staff a similar service from the company offices.
Group decision support systems
Until recently most of the work in decision support systems has been to help individuals make decisions. However, increasingly in all sectors, and particularly so in public health, decisions are not made by individuals—instead groups of people are involved. Rather than support only communication between members of a group, group decision support systems have features and functions that help these groups form a consensus or come to a decision (McNurlin and Sprague 1989).
The desired design of a group decision support system typically includes several features or characteristics. Each participant is able to work independently of the others, and then publicly release or demonstrate his or her personal work. When personal work is released, all group members are able to retrieve and view it. Similarly, each member is able to retrieve and view the work performed by the group as a whole.
Elements of a group decision support system include a database, a group decision support systems model base, specialized application packages, a good user interface, plus the ‘people’ component. The people include not only the participants but also a group facilitator who is responsible for the smooth operation of the group and who may also serve as the operator of the computerized system. Additional features typically include numerical and graphical summarization of ideas and votes, programs for specialized group procedures (such as calculating weights of different alternatives), anonymous recording of ideas, formal selection of a leader, handling progressive rounds of voting, and eliminating redundant input.
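Two of the specialized group procedures mentioned above, calculating weights of different alternatives and eliminating redundant input, can be illustrated with a minimal sketch. The function names, criteria, and ratings below are hypothetical illustrations, not part of any particular group decision support product.

```python
# A minimal sketch of two group decision support routines: weighted scoring
# of alternatives and elimination of redundant (duplicate) submissions.

def weighted_scores(alternatives, criteria_weights, ratings):
    """Score each alternative as the weighted sum of its ratings per criterion."""
    scores = {}
    for alt in alternatives:
        scores[alt] = sum(criteria_weights[c] * ratings[alt][c]
                          for c in criteria_weights)
    return scores

def deduplicate_ideas(ideas):
    """Drop redundant submissions, ignoring case and surrounding whitespace."""
    seen, unique = set(), []
    for idea in ideas:
        key = idea.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(idea.strip())
    return unique

# Hypothetical example: a group weighing two service-delivery alternatives.
weights = {"cost": 0.5, "coverage": 0.3, "speed": 0.2}
ratings = {
    "clinic A": {"cost": 7, "coverage": 9, "speed": 5},
    "clinic B": {"cost": 8, "coverage": 6, "speed": 9},
}
print(weighted_scores(["clinic A", "clinic B"], weights, ratings))
print(deduplicate_ideas(["Mobile clinic", "mobile clinic ", "Home visits"]))
```

In a real group decision room, the ratings would be gathered anonymously from each participant's workstation before being summarized in this way.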
Undoubtedly, support for communications and support for decision-making will merge as researchers and developers from each area begin to address both. For example, rooms for video conferencing focus on communication support and seldom have workstations to support consensus building. Likewise, group decision rooms with workstations for each participant seldom have video conferencing capability to support dispersed groups. As the tools and technology improve, it is expected that group support will move beyond problems or decisions that are familiar to the participants to sensitive decisions, crisis decisions, confrontation decisions, and even 'regular' day-to-day decisions.
Data capturing technology
When the first portable computers (now called notebooks or laptops) arrived on the scene 10 years ago, these 15-kg units were considered saviours for users who needed to take their computers with them. As size began to decrease and the portable computer’s power began to increase, more and more people began to enjoy the benefits of computing ‘on the go’. The notebook is now full featured enough to function as the main computer for many users—all in a package that can weigh approximately 2 kg. Future notebook users can expect to see more features added with no gain in size or weight.
Firstly, lighter materials will be developed that will keep the weight down and more functionality will be integrated into the motherboards, so that additional add-ons are not required. The monochrome screen has disappeared as production yields increase and prices decrease on active-matrix colour LCD displays. Secondly, while nickel–metal hydride batteries are replacing traditional nickel–cadmium cells, lithium batteries are just emerging and are offering longer life and less weight. One of the most important developments in notebooks is the advent of the PCMCIA (Personal Computer Memory Card International Association) card. PCMCIA presents a new paradigm of computing that extends the portability of notebook computers.
These credit-card-sized devices offer 'plug-and-play' convenience. They can also be inserted and removed without turning off the computer. PCMCIA cards are available for a variety of functions, including network cards, memory expansion, storage, fax/modems, sound cards, and so on. PCMCIA cards have become a regular feature on desktop machines as well. One future scenario has users simply moving their data and applications back and forth between machines on a PCMCIA card. Another popular option is the notebook and docking station combination. The docking station will become more of a common accessory as users take the components out of their notebooks and plug them into the station at the office or home to obtain the benefits of a larger monitor, external keyboard, alternate pointing devices, better sound, or a link onto a network. One offshoot of notebook computing is the palmtop computer, an even smaller device capable of data collection in the field. Nurses are already using this form of technology, as are physiotherapists, chiropodists, and health visitors in the North West Anglia Health Care Trust in Peterborough in the United Kingdom. Once users have adjusted their work patterns and behaviours, they report that the palmtop leads to a more professional and business-like approach to the job, enabling them to rationalize the workload and manage their time more efficiently, thereby offering a higher standard of patient care than was previously possible (Bradford 1994).
Finally, voice recognition is not far away. This technology has been used for some time in the field of radiology and is now beginning to appear in other aspects of medicine. Computer systems are now being delivered with speech boards. Dictation systems run effectively only on a multitasking system that lets several programs run at the same time. One of the programs just sits and listens to what one is saying, looking for either command phrases or phrases to dictate. Doctors testing these new systems find them accurate and capable of supporting dictation at 60 to 70 words per minute (Mullin 1994). They are finding that voice recognition saves time and provides more control over clinical notes, bypassing the normal transcription process.
Data storage and retrieval technology
The power of information technology rests in its ability to process instructions very quickly. One aspect of this increased speed relates to a computer’s ability to store and retrieve data quickly. Primary storage is part of the computer’s central processing unit and is generally referred to as memory. Conversely, secondary storage is physically separated from the central processing unit. A type of secondary storage that is becoming increasingly common and affordable is the optical disk (Hicks 1993).
One type of optical disk shares the same technology as the digital compact disk players used with stereo systems and is referred to as a CD-ROM (compact disk read-only memory). Originally, data could be written on them only once; however, there are now erasable versions. The primary advantage of optical disks is large storage capacity at low cost; some of them cost less than $10 and hold 200 to 2000 megabytes (2 gigabytes) of data. Five hundred megabytes is the equivalent of 300 000 double-spaced typewritten pages, or the entire Encyclopaedia Britannica several times over.
Optical disks are used for storing large volumes of data, including photographs, that are not changed often. The Medline CD-ROM, available in most medical libraries, is a well-known medical example. A number of medical specialties receive their journal references on CD-ROMs and access the material at home on their personal computers. SAM-CD is the CD-ROM version of Scientific American's reference book on internal medicine. Scientific American sends out a CD-ROM every 3 months with the new data (including photographs) in place and ready to use. The Compendium of Pharmaceuticals and Specialties is available on CD-ROM, which offers users the power of the computer in conducting complex searches in a matter of seconds. New and more flexible software is being developed, as CD-ROM technology will soon become a standard component of all personal computers, much as hard disks became standard in the late 1980s. In the world of high-speed technology, CD-ROM drives are considered slow; the average access time is 1 s and the average transfer rate is only 300 000 bits of data per second. The newer WORM (write-once read-many) technology is being packaged in multifunction drives or 'jukeboxes' and has access times as fast as 45 ms. Still, this is also 'too slow', and not far away is holography, a technique for recording and then reproducing a complete image of a three-dimensional object, and possibly the next great technology in data storage. In August 1994, researchers at Stanford University reported the first digital holographic storage system. The team believes that 120 billion bytes can be stored per cubic centimetre using digital holographic storage and that access rates will be significantly faster than today's technology.
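The capacity and transfer figures quoted above can be checked with simple arithmetic. The sketch below, which treats 1 megabyte as 10\u2076 bytes for simplicity and takes 500 megabytes as a representative capacity, shows why a 300 000 bit/s transfer rate was considered slow.

```python
# Back-of-the-envelope check of the CD-ROM figures quoted in the text,
# treating 1 megabyte as 10**6 bytes for simplicity.

CD_CAPACITY_MB = 500            # representative figure used in the text
TRANSFER_BITS_PER_S = 300_000   # quoted CD-ROM transfer rate

# Time to stream an entire 500 MB disk at the quoted transfer rate.
capacity_bits = CD_CAPACITY_MB * 10**6 * 8
seconds_to_read = capacity_bits / TRANSFER_BITS_PER_S
print(f"Reading {CD_CAPACITY_MB} MB at {TRANSFER_BITS_PER_S} bit/s "
      f"takes about {seconds_to_read / 3600:.1f} hours")

# The text equates 500 MB with 300 000 double-spaced pages, implying
# roughly 1667 bytes of text per page.
bytes_per_page = CD_CAPACITY_MB * 10**6 / 300_000
print(f"Implied size of one double-spaced page: about {bytes_per_page:.0f} bytes")
```

The result, several hours to stream a full disk, explains the push towards the faster jukebox and holographic technologies described above.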
Given the 'information revolution' our society is experiencing, it is not uncommon to assume that 'information' implies only the involvement of computers and communication technology. In organizational settings, one often further assumes that the major issue involved is the introduction of information technology within the organization. What is often overlooked is that the introduction of information technology in an organization is much more of a social than a technical process. If the people involved in information management are to co-ordinate the acquisition and provision of information effectively, they must understand how people process information, both as individuals and as members of organized groups or units. The real challenges in implementing successful information systems are those of managing people and their perceptions.
An information system collects, classifies, processes, and stores data, and retrieves, distributes, and communicates data to decision-makers. This processed data may or may not then be transformed into information by the human decision-maker. In an organizational setting such systems are often called management information systems. This view was promulgated by Davis (1983) when he defined a management information system as an integrated man–machine system for providing information to support the operations, management, and decision-making functions in any organization. The system uses computer hardware and software, manual procedures, management and decision models, and a database. In many ways, information systems are an extension of the study of organizations, organizational systems, organizational behaviour, organizational functions, and management. An organization is an administrative and functional structure of human resources, material, and natural and information resources co-ordinated in some manner to achieve a purpose.
Since the 1990s, the once traditional organization has quickly been replaced by the ‘virtual corporation’ (Davidow and Malone 1992). Whether real or virtual, any organization is held together by the methodologies of acquiring, processing, retaining, transmitting, and utilizing information. The purpose of an information system is to support managerial activities of all types at all levels of an organization. An organizationally based information system acquires, processes, stores, and transmits raw material which is usually a mixture of (a) factual data, (b) material that has been subjected to interpretation in its passage through the system, and (c) other content that is openly acknowledged to be the opinions, judgements, and observations of individuals both within the organization and outside it. The value of this material, that is information, depends upon the use to which it can be put. Measuring information, decision-making, and productivity in information processing is an unresolved problem. Information is an essential commodity and a unique resource. It is often not depreciable and a ‘purchaser’ may not be able to determine the value of an information item without examining it. Information is not a ‘free good’. It is a resource no less essential to the survival of an organization than are personnel, material, and natural resources. Information is a resource that must be conserved, recycled, and protected. As with any other resource, it must be managed.
Increasingly, organizations are coming to accept this premise and hence look for people who view information management from an 'information science' rather than a 'computer science' perspective. The two perspectives are related but by no means the same. People with computer science backgrounds tend to be more concerned with computer hardware and software. Their formal education had a strong theoretical and mathematical basis, with particular emphasis on algorithm development. They probably have a thorough grounding in the study of the implementation of algorithms in programming languages which operate on data structures in the environment of hardware. They usually have had little exposure to information requirement analysis and organizational considerations. They have greater expertise in programming, system software, and hardware. People with such a technical background tend to be more machine and technology focused.
People with an information science background or orientation tend to be more concerned with people and the nature of information and information processes in the organization. They are more likely to assess the value of information and its effect on the performance of the decision-makers within the organization. In a health-care setting, they are more likely to be aware of how and why information is communicated between patients, clients, health-care providers, epidemiologists, administrators, evaluators, and planners. The use to which these people put information is the most critical criterion of the success of information systems, whether computer based or not.
If information is to be managed, someone has to be the information manager. The future will place new demands for information systems in public health environs. Information exchange between health-care facilities, governments, and other constituencies is becoming more prevalent, and the need for individuals within an organization to share and use the same information is becoming much more common. In this new climate, new professions are emerging—those of health information managers. These individuals will become active in planning, designing, implementing, managing, developing, and deploying information systems to meet the needs of rapidly changing health-care systems. These information systems will vary in complexity from simple central registers, to hospital and/or community data abstracting, and on to complex interinstitutional networked decision support systems. In health-care settings information is needed to support decisions that relate to:
promoting wellness, preventing illness, and curing or ameliorating disease
monitoring, evaluating, controlling, and planning health-care resources
formulating health and social services policy
advancing knowledge through research and disseminating knowledge through education.
An information manager is any individual within an organization who has been given the responsibility to manage the organization's information. Given the information revolution that today's society is experiencing, it is not uncommon to assume that information implies the involvement of computers and communication technology. In organizational settings, one often further assumes that the major issue involved is the introduction of information technology within the organization. As noted above, what is sometimes overlooked is that the introduction of information technology is much more a social than a technical process. If information managers are to co-ordinate the acquisition and provision of information effectively, they must understand how people process information both as individuals and as members of organized groups or units. They will need to have excellent interpersonal skills in order to teach, motivate, convince, and influence a variety of people. The real challenges in implementing successful information systems are those of managing people and their perceptions. Information managers will be agents of change—a bridge between older systems and models, and newer technologies and techniques. No matter where they are positioned in the organization, they are usually expected to develop planning processes for aligning all information systems to the strategic direction, objectives, and structure of the organization. This entails co-ordinating all information systems within the organization including computing services, minicomputers and microcomputers, records rooms, office automation, management engineering, voice communication, and other related areas. Determining the investment to be made in information systems and providing a rigorous and disciplined framework for evaluating information benefits versus information costs is also a part of the job.
Specific standards and guidelines need to be established for the definition, measurement, use, and disposition of information so that all segments within the organization are operating within the same framework. It is often left to the information manager to explain information technology and the need for new systems to staff at all levels of the organization. This critical educational role is often carried out in conjunction with the development of policies and procedures that ensure the co-ordination and justification of requests for personal computers, terminals, office automation devices, and various software packages.
These responsibilities can often be onerous, and those who succeed possess excellent interpersonal, written, and verbal communication skills (that is, an ability to function effectively at the board, senior and middle management, and operational levels of the health-care facility). Effective information managers understand the organization’s mission and the business that it is in. They also understand the complexity and dynamics of health-care delivery, are able to function in multidisciplinary teams and environments, appreciate ‘small p’ and ‘capital P’ politics, and are able to assess political situations. To be effective information managers, they have to be ‘doers’. They have to be able to demonstrate short-term success while making progress on the long-range information systems requirements. To do so they must understand the present and future capabilities of information technology, be technologically credible to their peers and staff, and be able to plan the effective use of information technology in the organization.
Information systems involve people, and information systems create change. Information managers must be able to manage change, which includes a sincere appreciation of the effects of change on people. They must be willing and able to teach and educate a wide variety of individuals at all levels of the organization, none of which can be done without having a positive attitude towards users. Effective information managers demonstrate leadership through effective listening, team building, and consensus building. They are creative, innovative, and have a vision of the future. Most of all, they have an honest concern for the organization's most critical resource—its people.
The health of a nation depends to a certain extent on how well organizations use the resources available to them to promote wellness, prevent illness, and cure disease. The health of an organization depends to a large extent on the effectiveness of the decisions made by its staff; effective decisions require effective managers and information systems, which produce reliable and useful information. The health of an information system is a function of how well it has been defined, designed, implemented, operated, and maintained. Keeping the organization’s information systems healthy is the role of the information manager regardless of his or her title.
In 1992, Scott Morton and his colleagues at the Massachusetts Institute of Technology's Sloan School of Management published a textbook entitled The Corporation of the 1990s (Scott Morton 1992). The work was the product of a 5-year, multimillion-dollar research programme on how organizations can make better use of information technology.
A consortium of Massachusetts Institute of Technology faculty and 12 corporate and public sector sponsors contributed financial resources, advice, and their workplaces as experimental sites. The group's focus was how new technologies are changing the way people work and the way organizations will collaborate and compete. The major findings have had a significant impact on a multitude of organizations in both the private and public sectors around the world, including the United Kingdom's National Health Service (NHS). The Massachusetts Institute of Technology findings are summarized under the following headings.
Fundamental changes due to changes in information technology
Information technology is enabling fundamental changes in the way work is done. The degree to which a person is affected is determined by how much their work is based on information; that is, information on what product to make or service to deliver and how to do it (production task), as well as when to do it and in conjunction with whom (co-ordination task). The impact on production work is apparent in:
physical production—affected by robotics, process control, and intelligent sensors
information production—affected by data processing computers for clerical tasks such as invoicing
knowledge production—affected by computer-assisted design/computer-assisted manufacturing.
What is less well known is that the new information technology is permitting a change in the economics and functionality of the co-ordinating process as distance can be shrunk towards zero as far as information flow is concerned. Time can shrink to zero or shift to a more convenient point. Organizational memory, as exemplified by the common database, can be updated by anyone and made available to all authorized users. New ‘group work’ and team concepts combine all three aspects of co-ordination: distance, time, and memory. The increasing availability of information technology can fundamentally change management work as relevant and timely information on changes in the external environment and the organization’s view of the environment affects the direction dimension. Relevant and timely information on measuring the organization’s performance against critical success factors affects the control dimensions. The second aspect of control is interpreting such measures against the corporate plan and determining what actions to take.
Integration of business functions
Information technology is enabling the integration of business functions within and between organizations. Public and private telecommunication networks are making the principle of 'any information, at any time, anywhere, and in any way you want to look at it' economically feasible. The boundaries of organizations are becoming more permeable; where work gets done, when, and with whom is changing. Electronic integration is surfacing in the following forms:
within the value chain—local area networks permit 'teams' to work together on a common product
end-to-end links of value chains between organizations—electronic data interchange and ‘just-in-time’ systems are shifting the boundaries of an organization to include elements of other organizations thereby creating a ‘virtual’ organization
value chain substitution via subcontract or alliance—permits an organization to take advantage of (mutual) economies of scale and the unique skills of its partner organization.
Electronic integration is removing unproductive buffers and leveraging expertise.
Shifts in the competitive climate
Information technology is causing shifts in the competitive climate of many industries. Information technology is introducing unprecedented degrees of simultaneous competition and collaboration between firms. It is becoming increasingly important to know when to support standards and when to try to pre-empt competitors by establishing a proprietary de facto standard. The benefits do not flow from the mere use of information technology but arise from the human, organizational, and system innovations that are added on to the original business benefit. Information technology is merely an enabler that offers an organization the opportunity to invest vigorously in added innovations if it wishes to stay ahead of its competitors.
New strategic opportunities
Information technology presents new strategic opportunities for organizations that reassess their mission and objectives. Organizations are going to go through three distinctive stages as they attempt to respond to their changing environments.
Automate—reduce the cost of production, usually by reducing the number of workers. As an example, scanners, bar codes, and universal product codes are being introduced for more than identifying goods.
Informate—what happens when automated processes yield information as a byproduct. This necessitates that knowledge workers develop new skills to work with new information tools; it often entails new ways of thinking.
Transform—a stage characterized by leadership, vision, and a sustained process of organization empowerment. It includes the broad view of quality but goes beyond this to address the unique opportunities presented by the environment and enabled by information technology.
Production workers will become analysers, a role requiring a different level of conceptual skill from what was needed before as a 'doer' or machine minder; it will require an ability to see patterns and understand the overall process rather than just looking at controlling information on a screen.
Successful application of information technology will require changes in management and organizational structure. Information technology is enabling a break-up, or disintegration, of traditional organizational forms; multiple skills can be brought together at an arbitrary point in time and location. The ability of information technology to effect co-ordination by shrinking time and distance permits an organization to respond more quickly and accurately to the marketplace. This not only reduces the assets that the organization has tied up but improves quality as seen by the customer. The 'metabolic' rate of the organization, that is, the rate at which information flows and decisions are made, is speeding up and will become faster in the next millennium. The measurements, rewards, incentives, and required skills all require rethinking in the new information technology-impacted world.
Management of public health organizations: global competition
A major challenge for management in this millennium will be to lead their organizations through the transformation necessary to prosper in the globally competitive environment. Management must ensure that the forces influencing change move through time to accomplish the organization’s objectives. Evidence to date is that, at the aggregate level, information technology has not improved profitability or productivity. Some of the reasons are:
benefits are there but simply not visible
improvement is in lower prices or better quality
investment in information technology is necessary to stay in business
the external world is demanding more
use of information technology in low pay-off areas
information technology is laid on top of existing services
no cost reduction, just cost replacement.
To go through the transformation process successfully, organizations must have a clear business purpose and a vision of what the organization is to become; a large amount of time and effort must be invested to enable the organization to understand where it is going and why. The organization must have a robust information technology infrastructure in place, including electronic networks and understood standards; the organization must invest heavily and early enough in human resources—all employees must have a sense of empowerment. Last, but by no means least, understanding one’s organizational culture and knowing what it means to have an innovative culture is the first key step in a move towards an adaptive organization.
Case study: the United Kingdom’s NHS information management and technology strategy
One public service organization which has adopted the Massachusetts Institute of Technology findings as a cornerstone to its corporate strategy, is the United Kingdom’s NHS. The goal of the NHS Management Executive is to create a better health service for the nation in three ways:
ensuring services are of the highest quality and responsive to the needs and wishes of patients
ensuring that health services are effectively targeted so as to improve the health of local populations
improving the efficiency of the services so that as great a volume of well-targeted effective services as possible is provided from the available resources.
The July 1992 White Paper, The Health of the Nation (Department of Health 1992), identified five priority areas and established key targets such as:
reducing deaths from coronary heart disease in the under-65 age group by at least 40 per cent by the year 2000
reducing cervical cancer by at least 20 per cent by the year 2000
reducing suicides by at least 15 per cent by the year 2000
reducing gonorrhoea by at least 20 per cent by 1995
reducing deaths from accidents among children under 15 by at least 33 per cent by 2005.
A strengthened information and research capability at central and regional levels was an essential component of the business plan. Expanded or new health surveys and epidemiological overviews to improve baseline statistics on the health of the population would be undertaken. A Central Health Outcomes Unit would lead on developing and co-ordinating work on assessment of health outcomes. Information systems, which enable adequate monitoring and review, would be developed including a public health information strategy (Ranade 1994). Such was the case in 1992 under a Conservative government. In May 1997, a new Labour government was elected and the policies changed. One of their first efforts was the White Paper Saving Lives: Our Healthier Nation (http://www.doh.gov.uk/ohn/execsum.htm). In it they rejected the previous government’s scattergun targets. Instead they set tougher (their term) but attainable targets in priority areas by the year 2010:
cancer: to reduce the death rate in people under 75 by at least a fifth
coronary heart disease and stroke: to reduce the death rate in people under 75 by at least two-fifths
accidents: to reduce the death rate by at least a fifth and serious injury by at least a tenth
mental illness: to reduce the death rates from suicide and undetermined injury by at least a fifth.
The NHS Information Management and Technology Strategy
In 1992, any public health information strategy was to be a part of the NHS Information Management and Technology Strategy, which was to respond to the business needs of the NHS to secure best benefit and value for money from information management and technology investment. It was to set the direction for computerization and information sharing across the NHS into the next century. The Strategy was intended to ensure that the implementation of information systems in the NHS was co-ordinated and managed to achieve maximum potential benefits for patients, clinical staff, management, and administrative staff.
The Strategy was intended to support better care and communication through the appropriate use of information management and technology. It was to provide a framework for the connection and exchange of data (Keen 1994). Whether or not it achieved these goals is outside the purview of this chapter. In the opinion of some it failed, while others hold the view that critical infrastructure elements were indeed put into place.
In September 1998, the Labour government released its own information strategy entitled Information for Health: an Information Strategy for the Modern NHS, 1998–2005 (http://www.nhsia.nhs.uk/). The purpose of this information strategy is to ensure that information is used to help patients receive the best possible care. The Strategy will enable NHS professionals to have the information they need both to provide that care and to play their part in improving the public's health. The Strategy also aims to ensure that patients, carers, and the public have the information necessary to make decisions about their own treatment and care, and to influence the shape of health services generally.
The government has set out the following specific objectives to be delivered through the implementation of this strategy over the period 1998 to 2005:
to ensure that patients can be confident that the NHS professionals caring for them have reliable and rapid access, 24 hours a day, to the relevant personal information necessary to support their care
to eliminate unnecessary travel and delay for patients by providing remote on-line access to services, specialists, and care, wherever practicable
to provide access for NHS patients to accredited independent multimedia background information and advice about their condition
to provide every NHS professional with on-line access to the latest local guidance and national evidence on treatment, and the information they need to evaluate the effectiveness of their work and to support their professional development
to ensure the availability of accurate information for managers and planners to support local Health Improvement Programmes and the National Framework for Assessing Performance
to provide fast, convenient access for the public to accredited multimedia advice on lifestyle and health, and information to support public involvement in, and understanding of, local and NHS policy development.
NHS Information Management and Technology Principles
The five key principles of the 1998 Strategy are exactly the same as those of 1992.
Information will be person based. Priority will be given to person-based systems where data is collected as part of the process of care. Such systems will hold a health-care record for each individual, which can be uniquely referenced to that person’s new English NHS number and thereby shared with other systems that use the same identifying key.
Systems are to be integrated. Wherever possible information should be entered into a computer only once; seamless care needs seamless information. After that it should be available to authorized NHS employees, with steps taken to protect confidential information from unauthorized access.
Information will be derived from operational systems. Whenever possible, information is to be captured at the point of delivery of care, from systems used by health-care professionals in their day-to-day work. There should be little need for separate systems to record management information. Information for management purposes (administrative, financial, research, and so on) should be derived from operational point-of-care systems. Data not collected in a way that helps clinical professionals do their jobs better will not be clinically acceptable and will not be usable for other purposes.
Information must be secure and confidential. While recognizing the need for sharing and accessibility of information across organizations, all systems must recognize and respect the principles of privacy, security, and confidentiality. Great care is being taken to ensure that all the data held in a computer will be available only to those who need to know it and are authorized to know it.
Information will be shared across the NHS. Common standards and NHS-wide networking will allow computers to communicate so that information can be shared between health-care professionals and organizations, again subject to security and the safeguard of confidentiality.
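Taken together, these principles describe an architecture in which each person's care record is keyed by a single shared identifier (the NHS number), data is entered once at the point of care, and management information is derived from the operational data rather than recorded separately. A minimal sketch of that idea in Python follows; the record fields, system names, and sample identifiers are invented for illustration and do not represent any actual NHS system:

```python
# Illustrative sketch only: two hypothetical operational systems hold
# person-based records keyed by a shared unique identifier (standing in
# for the NHS number). The person's record is assembled by linking on
# that key, and management information is derived from operational data.

gp_system = [
    {"nhs_number": "943 476 5919", "event": "consultation", "practice": "A"},
    {"nhs_number": "943 476 5919", "event": "prescription", "practice": "A"},
    {"nhs_number": "401 023 2137", "event": "consultation", "practice": "B"},
]

hospital_system = [
    {"nhs_number": "943 476 5919", "event": "admission", "ward": "cardiology"},
]

def person_record(nhs_number):
    """Assemble one shared care record from all operational systems,
    linked by the common identifying key."""
    return [r for r in gp_system + hospital_system
            if r["nhs_number"] == nhs_number]

def activity_summary():
    """Derive management information (event counts) from the same
    operational data, with no separate management data entry."""
    counts = {}
    for r in gp_system + hospital_system:
        counts[r["event"]] = counts.get(r["event"], 0) + 1
    return counts

print(person_record("943 476 5919"))  # three linked events for one person
print(activity_summary())
```

The sketch also shows why the security principle matters: once records from different organizations can be joined on a single key, access control and confidentiality safeguards must apply to the linked whole, not just to each source system.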
The specific targets of the new 1998 Strategy are:
reaching agreement with the professions on the security of electronic systems and networks carrying patient-identifiable clinical information
developing and implementing a first generation of person-based electronic health records, providing the basis of life-long core clinical information with electronic transfer of patient records between general practitioners
implementing comprehensive integrated clinical systems to support the joint needs of general practitioners and the extended primary care team, either in general practitioner practices or in wider consortia (for example, primary care groups/primary care trusts)
ensuring that all acute hospitals have the ability to undertake patient administration, including booking for planned admissions, with an integrated patient index linked to departmental systems, and capable of supporting clinical orders, results reporting, prescribing, and multiprofessional care pathways
connecting all computerized general practitioner practices to NHSnet
providing 24-hour emergency care access to relevant information from patient records
using NHSnet for appointment booking, referrals, discharge information, radiology and laboratory requests, and results in all parts of the country
the development and implementation of a clear policy on standards in areas such as information management, data structures and contents, and telecommunications, with the backing and participation of all key stakeholders
community prescribing with electronic links to general practitioners and the Prescription Pricing Authority
routinely considering telemedicine and telecare options in all Health Improvement Programmes
offering NHS Direct services to the whole population
establishing local Health Informatics Services and producing costed local implementation strategies
completing essential national infrastructure projects including the networking infrastructure, national applications, and so on
opening a National Electronic Library for Health with accredited clinical reference material on NHSnet accessible by all NHS organizations
planning and delivering education and training in informatics for clinicians and managers.
As a result of yet another round of reforms in the United Kingdom, there is a fundamental change in emphasis of Health Authority information responsibilities (from contracting to public health and service effectiveness) and a need to establish a two-way flow of information between the NHS and the communities it serves. This suggests that the development of Health Authorities’ information capability may need specific attention in the implementation programme for the new NHS, the public health White Paper (in due course), and the implementation of this strategy. The effective use of the informatics skills of current public health practitioners will be particularly important.
Health Authorities and their directors of public health already have access to a variety of nationally produced public health, epidemiological, and mortality data. The data presented in the Public Health Common Data Set are of particular value. These will be supplemented by the new National Framework for Assessing Performance, which is currently being road-tested. However, the range of data available needs to be extended if the vision of Our Healthier Nation and the consequent increased responsibilities are to be met. The information that may be needed to assess resistance to antibiotics is one example of why information requirements must be kept under review.
Conclusion—questions to be answered
Every individual has their own view, their own perception of the world around them. This view is a result of their individual background, culture, education, and values. Not everyone perceives that technology will affect them in the same way. As recently as 1994, a survey of nursing students found that over 95 per cent of them felt that they would never speak to a computer or use an expert system. Yet even at that time, voice recognition technology had moved out of the research laboratory and expert systems were routinely being used in a growing number of sectors, including health care.
We are witnessing not only the automation of clerical activities, but also the automation of thoughtful technical and clinical work. What are the consequences? Will the responsibility for the production of reliable information rest more with rules incorporated in equipment and on established procedures than on a health professional’s judgement? The acute care sector of health care is undergoing dramatic and radical changes in delivery and management, many of which are the result of new technologies and the realities of modern-day fiscal constraints. The acute care sector is under increasing pressure to account for its actions and to justify its decision-making and its use of resources. Health-care organizations worldwide are in the process of re-engineering and changing the way people do their work.
Is the same degree of change and accountability occurring in the public health sector? Will the public health information systems, which currently support financial accounting and programme delivery, be expected to allow costs to be matched to services provided and productivity to be monitored? Will the public health practitioner of the future be a multiskilled individual whose method of working is dramatically different from today’s? If not, why not?
Anonymous (1998). Epidemiology at the Web café. Technology Review, 101, 54.
Bradford, A. (1994). Palmtop practitioners. British Journal of Healthcare Computing, 10, 12–13.
Churchman, C. (1971). The design of inquiring systems. Basic Books, New York.
Davidow, W.H. and Malone, M.S. (1992). The virtual corporation: lessons from the world’s most advanced companies. Harper, New York.
Davis, G. (1983). Evolution of information systems as an academic discipline. Administrative Sciences Association of Canada Conference Proceedings, pp. 185–9. WBC Press, Vancouver.
Denning, P. (1999). Talking back to the machine. Copernicus Books, New York.
Department of Health (1992). The health of the nation. Cmnd 1986, HMSO, London.
Friede, A., et al. (1994). CDC WONDER: a co-operative processing architecture for public health. Journal of American Medical Informatics Association, 1, 303–12.
Hannah, K. (1985). Current trends in health informatics: implications for curriculum planning. Computers in Nursing, North Holland, Amsterdam.
Hicks, J. (1993). Management information systems: a user perspective. West Publishing, New York.
Kaiser, L. (1999). Quantum leaps in healing. Health Forum Journal, 42, 50.
Keen, J. (ed.) (1994). Information management in health services. Open University Press, Buckingham.
Levy, A.H. (1977). Is informatics a basic medical science? In Medinfo 1977 Proceedings (ed. D. Shires and H. Wolfe), pp. 979–81. North Holland, Amsterdam.
Linthicum, D. (1994). Tommy, can you see me? Open Computing, September, 67–8.
McNurlin, B.C. and Sprague, R.H. (1989). Information systems in management practice. Prentice Hall, Englewood Cliffs, NJ.
Meadow, C.T. (1979). Information science and scientists in 2001. Journal of Information Science, 1, 217–21.
Moehr, J.R., et al. (1979). Four specialized curriculums for medical informatics—review after 6 years of experience. Proceedings of the International Conference in Medical Computing, Springer-Verlag, Berlin.
Mullin, S. (1994). Start talking to your computer—three physicians rate IBM’s speech recognition system. Canadian Medical Informatics, 1, 16–17.
Ohlson, K. (1998). Non-United States Net users to dominate by 2002. Computer World, July 16.
Plucauskas, M. (1994). Internet and medicine part II: hooking up and using the Internet. Canadian Medical Informatics, 1, 28–30.
Protti, D.J. (ed.) (1982). A new under-graduate program in health informatics. AMIA Congress 1982 Proceedings, pp. 241–5. Masson, San Francisco, CA.
Radford, K.J. (1978). Information for strategic decisions. Reston, New York.
Ranade, W. (1994). A future for the NHS: health care in the 1990s. Longmans, Harlow.
Reichertz, P. (1973). Protokoll der Klausurtagung Ausbildungsziele. Methoden in der Medizinischen Informatik, 2, 18–21.
Scott Morton, W. (ed.) (1992). The corporation of the ’90s. Harvard University Press, Cambridge, MA.
Shannon, C.E. and Weaver, W. (1960). The mathematical theory of communication. University of Illinois Press, Urbana, IL.
Shires, D. and Ball, M. (1975). Update on educational activities in medical informatics. Proceedings of the 5th Annual Conference of the Society for Computer Medicine, pp. 52–4. Washington.
Smith, R. and Gibbs, M. (1994). Navigating the Internet. SAMS Publishing, Indiana.
Strauss, P. (1994). Beyond talking heads: videoconferencing makes money. Datamation, 1 October, 38–41.
Tapsell, S. (1998). Telling it like it is with teleconferencing. Management, 45, 65.
Taubes, G. (1998). Virus hunting on the web. Technology Review, 101, 50.
van Bemmel, J.H. (1984). The structure of medical informatics. Medical Informatics, 9, 175–80.
Wiener, N. (1948). Cybernetics. Prentice Hall, Englewood Cliffs, NJ.